Ethical Artificial Intelligence in Business
Artificial intelligence (AI) and the data economy together form one of the four grand challenges in the UK Industrial Strategy. Many, if not most, companies now use AI systems in their daily operations and business processes, including, for example, the HR function (talent acquisition, employee engagement), customer relations, business intelligence, logistics and supply chain. The growing commercial interest in the area has brought a deepening awareness that AI raises some serious ethical and business risk issues. For example, many AI-based HR systems for talent acquisition use machine learning to draw up initial shortlists of applicants from data gathered in previous recruitment campaigns. This tends to favour the predominant stereotypes embedded in those historical decisions, exposing the company to the risk of building an unbalanced workforce, possibly incurring severe brand damage and potentially breaching equality law. Many similar issues have been identified in recent years, including AI-based medical systems designed around Caucasian health profiles and lifestyles, adverts appearing next to inappropriate content on social media, and racially biased image recognition datasets.
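As a simple illustration of how this kind of bias can arise, the sketch below trains a shortlisting classifier on synthetic "historical" decisions that favoured one group. This is a minimal, hypothetical example on made-up data, not a description of any particular commercial product. The protected attribute is never given to the model, but a correlated proxy feature is enough for the new shortlists to reproduce the old imbalance.

```python
# Minimal sketch on synthetic data (not any specific commercial system): a shortlisting
# classifier trained on "historical" decisions that favoured group 1. The protected
# attribute is excluded from the inputs, but a correlated proxy feature is enough for
# the model to reproduce the historical imbalance on a fresh cohort of applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                           # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)                         # genuinely job-relevant signal
proxy = skill + 1.5 * group + rng.normal(0.0, 1.0, n)   # proxy correlated with group
                                                        # (e.g. hobby, postcode, CV wording)
# Historical shortlisting favoured group 1 regardless of skill.
shortlisted = (skill + 2.0 * group + rng.normal(0.0, 1.0, n)) > 1.0

# Train only on "neutral-looking" features; the protected attribute is excluded.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), shortlisted)

# Score a new cohort and audit selection rates per group.
new_group = rng.integers(0, 2, n)
new_skill = rng.normal(0.0, 1.0, n)
new_proxy = new_skill + 1.5 * new_group + rng.normal(0.0, 1.0, n)
selected = model.predict(np.column_stack([new_skill, new_proxy]))
for g in (0, 1):
    print("Selection rate for group %d: %.2f" % (g, selected[new_group == g].mean()))
```

Auditing selection rates per group on a held-out cohort, as in the last few lines, is the kind of check that can surface this risk before a system is deployed.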
In addition to these recognised issues, it is highly likely that other, as yet unidentified, issues arising from the deployment of AI systems will emerge, potentially many years after implementation, causing brand damage and increased legal risk. Once such issues come to light, they can be difficult and very expensive to correct, as the AI system concerned is likely to have become deeply embedded in the company’s IT architecture. All of these potential risks can reduce business confidence and trust in AI systems.
The business community has started to get to grips with these issues, seeking to introduce standards and regulations that minimise the risk of deploying AI-based business solutions. At Oxford Brookes University, we are seeking to support businesses in this endeavour, helping them to understand and plan for both the opportunities and the risks that AI technology presents. We are particularly interested in exploring how AI systems can embody the values of an organisation and operate within its brand. To this end, the university is bringing together a diverse group of world-leading experts who blend knowledge and skills from technology, business, social science and the life sciences. We offer expertise in areas that include AI and machine learning, psychology, business development, economics and accounting, marketing, gender-based law, equality and diversity, coaching and mentoring, and digital health and wellbeing. This support can be given to businesses in a number of ways, including consultancy, contract research, CPD and training, funded PhD studentships, Knowledge Transfer Partnerships (Innovate UK), and research projects funded by Innovate UK, the research councils and various charities. Come and visit our stand at Venturefest Oxford 2018, where we can discuss these opportunities with you.
Prof Nigel Crook BSc (Hons), PhD, PFHEA, MIEEE
Associate Dean: Research and Knowledge Exchange, Faculty of Technology, Design and Environment
Head of the Cognitive Robotics Research Group
Oxford Brookes University
Robots of Good Character: Equipping robots with moral competence
In his Inaugural Professorial Lecture on the 6th June, Prof Nigel Crook explored the possibility of building robots that have good moral character. Prof Crook argued that as robots become increasingly autonomous, human-like and embedded in society, there will be an expectation and a need for them to be equipped with some degree of moral competence (‘functional morality’). He also argued that robots should never be regarded as possessing the full moral agency that is commonly attributed to humans.
The lecture also included a summary of his personal and academic journey and outlined his motivations for embarking on research into so-called autonomous moral machines, bringing together concepts from the philosophy of ethics, theology and cognitive science.
The video of the lecture can be found here.
Humanoid Robots at Osaka
Visit to Robotics Research Labs at Osaka
Nigel Crook, James Balkwill and Matthias Rolf visited Osaka University, Japan, to establish collaborations in the area of robotics. The team visited the labs of Prof. Asada (a co-founder of RoboCup soccer), Prof. Hosoda, and Prof. Ishiguro (who famously built an android version of himself, see picture). Topics of discussion included human-robot interaction, robot ethics, and actuator design.
Robot body pose mirroring

Fig 1. Experimental room setup
Have you ever caught yourself copying or mirroring the body pose of someone you are in conversation with or noticed that others are copying your body pose?
What’s that all about? It is well known that this body-pose mirroring is a natural and often subconscious social behaviour that can build rapport, increase empathy and facilitate social interaction. So, if we wanted to build rapport, increase empathy and facilitate social interaction between a robot and a person, then body pose mirroring might be worth investigating.
There have been a number of recent studies evaluating the influence of a robot’s non-verbal behaviour on the way humans perceive and interact with it. We recently completed one small study which investigated the effect that upper body mimicry has on how people perceive robots. We did this by inviting people to a face-to-face interaction with a Nao robot (Fig 1) and then asking them to complete the GODSPEED questionnaire. We found that when the robot was mirroring the participants’ upper body pose, the participants rated the robot’s humanness more highly and seemed to experience greater empathy with the robot.
More details of this study can be found here.
Fuente, L.A., Ierardi, H., Pilling, M. and Crook, N.T., 2015, October. Influence of upper body pose mirroring in human-robot interaction. In International Conference on Social Robotics (pp. 214-223). Springer, Cham.
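For readers curious about how such mirroring might be driven in practice, here is a minimal, hypothetical sketch using NAOqi’s ALMotion API. It is not the implementation used in the study above: the get_human_arm_angles() stub stands in for a real skeleton tracker, and the robot address is a placeholder.

```python
# Hypothetical sketch of upper-body mirroring on a Nao via NAOqi's ALMotion API.
# Not the implementation from the study above: get_human_arm_angles() is a stand-in
# for a real pose tracker, and NAO_IP is a placeholder address.
import time
from naoqi import ALProxy

NAO_IP = "192.168.1.10"   # placeholder robot address
motion = ALProxy("ALMotion", NAO_IP, 9559)
motion.setStiffnesses("Body", 1.0)

# Mirroring swaps sides: the person's LEFT arm drives the robot's RIGHT arm.
# On Nao, pitch joints keep their sign across sides, while roll/yaw joints flip.
HUMAN_TO_ROBOT = {
    "LShoulderPitch": ("RShoulderPitch", +1.0),
    "LShoulderRoll":  ("RShoulderRoll",  -1.0),
    "LElbowYaw":      ("RElbowYaw",      -1.0),
    "LElbowRoll":     ("RElbowRoll",     -1.0),
    "RShoulderPitch": ("LShoulderPitch", +1.0),
    "RShoulderRoll":  ("LShoulderRoll",  -1.0),
    "RElbowYaw":      ("LElbowYaw",      -1.0),
    "RElbowRoll":     ("LElbowRoll",     -1.0),
}

def get_human_arm_angles():
    # Hypothetical stand-in for a skeleton tracker (e.g. a depth-camera pipeline).
    # Returns a fixed pose so the script runs without tracking hardware, assuming
    # angles are already expressed in Nao's same-side joint conventions (radians).
    return {"LShoulderPitch": 1.0, "LShoulderRoll": 0.3,
            "LElbowYaw": -0.5, "LElbowRoll": -0.8}

def mirror_step(human_angles):
    names, targets = [], []
    for human_joint, angle in human_angles.items():
        robot_joint, sign = HUMAN_TO_ROBOT[human_joint]
        names.append(robot_joint)
        targets.append(sign * angle)
    # Non-blocking interpolation at 20% of maximum speed keeps the motion smooth.
    motion.setAngles(names, targets, 0.2)

for _ in range(100):                    # mirror for roughly ten seconds
    mirror_step(get_human_arm_angles())
    time.sleep(0.1)                     # ~10 Hz update rate
```

In a real setup the tracker’s angles would need calibrating into Nao’s joint conventions and clamping to its joint limits; NAOqi clips out-of-range commands, but explicit calibration gives smoother, safer motion.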

Fig 2. Some robot body poses
Discussion on robot ethics

Prof Crook and Dr Rolf visit Prof Jochen Steil at Technische Universität Braunschweig
On the 24th January, Prof Nigel Crook and Dr Matthias Rolf travelled to meet Prof Dr Jochen Steil at the Institute for Robotics and Process Control, Technische Universität Braunschweig. The purpose of the visit was to continue our discussions on how we can develop robots with moral competence. We are seeking inspiration from the way young children begin to follow social norms within their family circle.
During our visit we were treated to a tour of the robotics lab (photos below).