What is interesting to note
is that one of the national standardization institutes,
namely the British Standards Institution,
has already published the first standard on how robots and robotic devices need to operate,
and on what underlying ethical principles should be used in
designing and applying robots in certain environments.
So, if you would like to consider this, you can look into
it and see what they have produced and given us
as thoughts on what type of information needs to be embedded in
standards on automated driving environments, for instance.
Then, in concluding this course,
and also in concluding this discussion of the future use and role of standards,
I went back to Asimov's laws on what a robot should do
and what a robot shouldn't do.
It is interesting that in 1950, for instance,
Alan Turing argued that
when a human and a computer
are both answering the questions of an observer,
and the observer cannot tell which answers come from the computer,
then at that moment AI will be like a human being.
Whether we will really reach that point remains to be seen.
Interesting, just like Asimov. On the other hand,
only recently, in the IEEE digital ethics environment,
a number of ethical rules have been devised that should be
taken into account when making new systems,
when making robots, when making cars that can behave and drive by themselves.
For instance, here you can see that this group of people
considers a robot to be a product, not a human being,
but read for yourself what these five ethical rules are
and what they are using as basic principles.
The last approach towards, let's say,
safeguarding our human values could be the UN Universal Declaration of Human Rights,
which we are trying to use all over the world as
a basic principle for behaving nicely and humanely towards each other.
The question I would like to leave this course with,
and which I would also like to leave with you to start thinking about,
is: what should we use
as underlying principles for making computers work for us,
for making computer software programs work for us?
How transparent should these be?
What should be the underlying principles?
Should that be the UN Declaration?
Should it be Asimov's laws?
Should it be the ethical rules as devised by the IEEE Working Group?