Dr Yannis Kalfoglou
Artificial Intelligence

Self-driving cars: AI’s big bet

  • 06/04/2017

The car has had a remarkable impact on human history in the 170 years since the first four-wheelers hit the roads. It went from being a mechanical challenger to the established means of transport at the turn of the 20th century to the dominant mode of land transport for individuals and businesses throughout that century. Over the past 100 years or so the car has occupied a wide spectrum of public roles: an iconic symbol of wealth, whose ownership elevates the owner to a higher social status; a villain responsible for much of the world's pollution and the main reason for our clogged-up journeys to work and leisure; a culprit in the roughly 1.3 million deaths and 30 million injuries caused by road accidents every year. But somehow our relationship with cars remains undented: there are well over 1.2 billion cars on the roads today, and some estimate that there will be over 2 billion by 2030.

However, artificial intelligence, a field whose recent meteoric progress and impact on society have captured the imagination and awe of practitioners, aims to change our relationship with cars by altering their fundamental principle: from machines driven by humans to machines that drive themselves, reducing humans to mere passengers. That is a lofty and somewhat controversial goal, but it comes with both advantages and disadvantages.

But first, let's unpack the notion of self-driving and see what it means and what stages a car goes through before it becomes fully autonomous. The Society of Automotive Engineers (SAE) has proposed a taxonomy and definitions for the autonomy levels of cars' automation systems, which national bodies such as the UK Government Office for Science have slightly adapted:
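
The original post showed these levels as a table; as a stand-in, here is a compact, paraphrased rendition of the commonly cited SAE J3016 levels, expressed as a small Python mapping:

```python
# Paraphrased summary of the SAE J3016 automation levels referred to above.
SAE_LEVELS = {
    0: "No automation: the human driver does all the driving",
    1: "Driver assistance: a single function (steering or speed) is assisted",
    2: "Partial automation: combined steering and speed control, driver monitors",
    3: "Conditional automation: the system drives, the human must take over on request",
    4: "High automation: the system handles driving within defined conditions or areas",
    5: "Full automation: the system can drive anywhere a human could, no driver needed",
}
```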

Clearly, level 5 hasn't been achieved yet for large-scale commercial use; level 4 is where most of the action is today. But we already have automation systems in our cars that continuously evolve: assisted driving functions such as satellite navigation and cruise control are now well known, and anti-lock braking systems (ABS), which automatically activate safety mechanisms, have been mandatory on new passenger cars in some regions. Modern cars contain dozens of electronic control units (ECUs), computers which control everything from the engine to the onboard entertainment system. “Drive-by-wire” technology, which replaces traditionally mechanical connections with electrical systems (analogous to aviation’s “fly-by-wire”), has also become increasingly common. In fact, many drivers today are unaware of just how automated their vehicles are.

So where does AI fit in the quest to achieve L4 and L5 autonomy? AI has been there for a long time, since at least the 1960s, with a focus on computer-based guidance systems for aspiring self-driving cars. The early efforts sought to decompose car navigation into a powerful combination of three elements: sensing; processing the outside-world signals and deciding on the next move; and reacting to changes in the environment with the appropriate manoeuvre (e.g., crash avoidance, emergency stops). The most challenging part is processing the outside-world signals, the “understanding” of the car's surrounding environment; the other two steps use conventional technology, from radar and laser sensors to the mechanical parts that execute movement. Progress in AI was incremental rather than revolutionary. In the 1980s Ernst Dickmanns's Mercedes van drove hundreds of motorway miles autonomously, and in 2004 a US DARPA challenge in a desert setting brought many self-driving cars together to compete. That pivotal moment paved the way for many of the breakthroughs we see today. We now have better AI software for road-following and collision avoidance, improved radar and laser sensors, and comprehensive digital mapping technology. And the results are impressive, in technology and performance terms, for self-driving cars. NVIDIA's machine learning platform, for example, enables continuous learning and safe updates of a car's learning algorithms, even in the field. A large number of manufacturers are involved too, with more than 30 global corporations working on AI projects to build fully autonomous cars, alongside component suppliers such as Bosch, which works on the radar and other sensors used. Google's self-driving cars have driven millions of miles to fine-tune their algorithms, and Uber's self-driving cars are on a similar journey. This flurry of activity is evident in the emerging ecosystem of self-driving and autonomous cars “unbundling the car”, as market intelligence outfit CBInsights put it: “[they] are working not only in self-driving tech and automated driver assistance but also to improve different services and products associated with the auto industry. These range from high-profile autonomous driving startups like comma.ai, backed by Andreessen Horowitz, to companies focused on enhancing more traditional elements, like auto repair and tire technology.”
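
To make the three-element decomposition concrete, here is a minimal, illustrative sketch of a sense-decide-act control loop. All names and thresholds are hypothetical stand-ins; in a real stack the decide step would run learned perception and planning models over fused radar, lidar and camera data.

```python
import random
import time

def sense():
    """Stub sensor read: distance (in metres) to the nearest obstacle ahead.
    A real stack would fuse radar, lidar and camera data here."""
    return {"nearest_obstacle_m": random.uniform(0.0, 50.0)}

def decide(readings, cruise_speed_kmh=50):
    """The 'understanding' step described above: interpret the readings
    and choose the next manoeuvre. Reduced here to a single obstacle check."""
    if readings["nearest_obstacle_m"] < 5.0:
        return {"action": "emergency_stop"}
    return {"action": "cruise", "speed_kmh": cruise_speed_kmh}

def act(decision):
    """Stand-in for drive-by-wire actuation (throttle, brakes, steering)."""
    print("actuating:", decision)

def control_loop(cycles=5, hz=20):
    """Run the sense -> decide -> act cycle at a fixed rate."""
    for _ in range(cycles):
        act(decide(sense()))
        time.sleep(1.0 / hz)

if __name__ == "__main__":
    control_loop()
```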

 

But we shouldn't get too excited yet. There are a lot of obstacles to overcome before we achieve the fully autonomous, L5 self-driving cars we're dreaming of, and not all of them are limitations of today's technology.

Legal considerations

It's easy to imagine bizarre situations in which the driver of a car is not really driving it, because the car is self-driven, but is merely physically present while the car is involved in an accident or other activity with legal implications. Questions need clarifying, such as who is at fault when things go wrong, who carries the insurance liability, and how third-party suppliers are involved (from the manufacturers of road sensors that feed data to the autonomous car, to the machine-vision suppliers that enable the car to “see”). At present, in the UK at least, primary liability rests with the user of the car, regardless of whether their actions caused the accident. If defective technology caused the accident, the user (or their insurer) has to pursue this legally with the manufacturer. In addition, drivers are currently expected to maintain awareness and supervision of their car even when they are not in control of the driving because semi-autonomous features, such as lane-keeping assist or cruise control, are engaged. However, this is not as clear cut as it sounds. Looking at L2 automation features such as (adaptive) cruise control, there is an oft-cited (and possibly apocryphal) case from the States of inappropriate use of cruise control: Mrs Merv Grazinski of Oklahoma reportedly sued, successfully, the manufacturer of her new motorhome after she crashed it at 70mph while making a sandwich. Apparently she put the motorhome on cruise control and left the steering wheel and driver's seat to go and make herself a sandwich, then argued that the manufacturer had failed to warn her not to leave the wheel while cruise control was engaged. Textbook guidance on cruise control tells us that, used incorrectly, it can lead to accidents: speeding around curves that require slowing down, rough or loose terrain that interferes with the controls, or rainy and wet conditions in which the car can lose traction.

So, in situations like this, a misreading of how an automation feature should be used is the reason for putting liability on the human driver, who is expected to maintain control of the car at all times. But that quickly becomes counterintuitive and blurry when you drive a “self-driven” car, which subconsciously raises the expectation that the car will drive itself. The analogy with autopilots in aircraft is interesting: autopilots do not replace human pilots, but assist them in controlling the aircraft, allowing them to focus on broader aspects of the operation, such as monitoring the trajectory, the weather and the aircraft's systems.

It seems that the race to develop L5 autonomy has outpaced current legislation. When L5 is fully developed, for example, human drivers (or rather users) could be legally permitted to be distracted from driving and do things like send text messages, but this could require fundamental changes to legislation. Further, if laws are changed so that they no longer require any supervision in cars, a change in how liability is assigned may also be called for, reflecting the fact that car drivers/users are not expected to have any control over, or awareness of, the driving. To make things even more complicated, differences between country-level, regional and global legislation on the behaviour of car users need to be reconciled. Manufacturers of L5 cars might be required to adhere to data-management standards for downloading software update patches to autonomous cars while they are in different territories, possibly in network outage areas, and so on. Licensing procedures could also be affected, as the role of a driver changes to that of a trained user of an autonomous system, somewhat akin to the rigorous training aircraft pilots undergo in order to operate complex autopilot systems.

Moral and ethical dilemmas

Even if we find a way to come up with sensible and practical legislation that recognises the new role of (human) drivers, we still have to grapple with moral and ethical dilemmas. As cars become more autonomous and take full control, they will be called upon to act in situations involving unavoidable fatalities. An interesting virtual test set up by researchers tries to gauge public opinion on what a self-driving car should do in such tricky situations. This moral robo-car test shows that people favour utilitarian outcomes, choosing the option in which fewer human lives are sacrificed. But their feelings shift when they are the ones who might be sacrificed.

Ethical issues around the programming of self-driving cars affect the manufacturers. In the absence of legislation, manufacturers will have discretion to make certain ethical decisions in the programming of their cars, and consumers will be able to influence that programming through their purchasing preferences. This is going to be tricky, to say the least. The survey from the moral robo-car site apparently revealed that people are less likely to buy self-driving cars that have been ethically programmed to act in certain ways. It could then be the case that, owing to low demand for this type of car, manufacturers will not make them, which could reduce the overall benefit of self-driving cars.

Tackling moral and ethical dilemmas a priori is difficult. When we learn to drive, we are not given a predefined, morally acceptable list of outcomes for when we face such situations. If we are unfortunate enough to experience one, we act on the spot, following instinct and our emotional state at the time, influenced by adrenaline and other biological (over-)reactions that are tricky, if not impossible, to interpret, codify and load onto a self-driving car's memory.

Insurance implications

Car insurance will almost certainly change completely from what we know today if and when L5 self-driving cars hit the roads. A drastic reduction in car accidents will force a rethink of why we have insurance in the first place, since it is humans who affect the risk calculation and drive premiums up or down. KPMG, for example, predicts that car accidents will fall by 80% by 2040 thanks to self-driving cars. However, regardless of who is controlling the car, humans will continue to be present in, or have supervision of, autonomous and unmanned cars. Training and education will be required to ensure that people who interact with these vehicles have the appropriate competence and awareness for safe and responsible operation.

The implications for the insurance industry as a whole could be drastic: claims processing and loss adjustment could largely disappear, yet there will still be a need for some sort of insurance (possibly covering theft, vandalism, etc.). Insuring L5 cars could well be linked to software-industry practices, rewarding (or penalising) good software-update discipline, driving habits (all monitored through advanced telematics) and other technology-related factors.

Switching between fully and semi-autonomous driving should also be considered in the insurance context. There is a risk that a driver could misjudge the responsibility they have at a given moment, or may not adequately understand how to choose between the different modes of operation of their car, or even how to retake control when necessary. This may be a more difficult risk to address than it seems; anecdotal evidence suggests that people tend to 'switch off' when it seems that their input is not needed.

AI technology is not enough

The challenges listed above are difficult to tackle. At the same time, we keep pushing forward with advanced AI techniques that are making serious inroads toward L5 self-driving cars. It appears that we need to pace our technological progress against progress on all the other fronts: legal, moral and procedural (insurance, infrastructure, etc.). Progress on those fronts will in turn mean getting that knowledge into our AI-driven cars, building the truly intelligent and complex AI needed to drive our L5 self-driving cars flawlessly.

But as we strive to get there, we need to be more open and transparent about what works and what doesn't. Commercial interests, technological excitement and heightened expectations at times make us forget that we shouldn't rush the journey. Getting L5 self-driven cars out there is not a panacea; some believe it is not even necessary. Older cars are not likely to be retrofitted to keep up with their modern L5 siblings, and there are also people who enjoy driving for its own sake: it is estimated that there are more than half a million classic cars in the UK, and a study conducted by the UK's Automobile Association found that 65% of people liked driving too much to want an autonomous car. AI's bet on delivering the core technology that will eventually make L5 self-driving cars a reality is indeed a big one. Possibly one of its biggest thus far.

Artificial Intelligence

AI’s impact on jobs: (not) the elephant in the…

  • 26/03/2017

AI-driven automation is happening. It benefits many, and it causes anxiety, even fear, in many more. It has captured the imagination of many futurists and occupies prime time in the diaries of decision-makers in governments, large corporations and policy bodies, and of individuals everywhere. But there is a subtle question that bothers me about all this: is AI the elephant in the room? Is AI that out-of-the-ordinary phenomenon that nobody acknowledges, or is willing to acknowledge, which has finally matured after 60-odd years of hard work and is now capable of automating millions of jobs worldwide, potentially rendering tens of millions of people unemployed overnight? Or is it something else that we fail to acknowledge or even grasp?

AI driven automation is here to stay

Undoubtedly, AI is commercially attractive. There are many benefits in applying AI-driven technologies to automate routine, mundane, repetitive tasks. That automation is commercially attractive because it can free up time for full-time employees (FTEs) to concentrate on higher-order tasks, or allow them to be redeployed to do other things and increase the output and profitability of the business. At least, that's the narrative, the nice-picture outcome. But there is also the ugly-picture outcome: if AI-driven automation is not handled properly, a lack of planning or resources can mean that roles are not redeployed, or that there are no higher-order tasks to move to, leading to straightforward replacement and job losses. That is the core of the job-fear narrative from some sceptics of AI-driven automation technology.

If we look at the evolution of the corporate world over the past 20-30 years, a lot has changed, and the pace of change remains fast. As McGowan points out in her excellent, informative piece: “the purpose of the company has changed from one that aggregated work effort in order to optimize productivity and create value for customers to one that aggregates profitability in order to create value for shareholders”. This has had a profound impact on the treatment of material resources in the 21st century, where intangible products take precedence over material ones (for example, software versus bricks-and-mortar, and millennials' preference for shared use of assets over ownership). As she puts it, it has “shifted the workforce from an asset to develop to a cost to contain as companies created more and more financial value with fewer and fewer humans”.

One picture is worth a thousand words, as the saying goes: in the space of less than 30 years, leading software companies in the States produced 40x the market cap of traditional manufacturing companies with 10x fewer FTEs.

 

Treating your FTEs purely as a cost, though, can have a profound impact on the future of employment in the face of AI-driven automation, as I analyse further down. But first, let's look at how job requirements have changed over the past 30-40 years. By 2014, over 90 million jobs in the States involved some sort of cognition-intensive function, almost double the figure of the early 1980s, whereas routine jobs, manual and non-manual, grew far less impressively.

The growth of cognition-based jobs is no surprise given today's knowledge economies, and it also makes it easier to see where and how to apply AI-driven automation at large. AI is good when a job consists of frequent, high-volume tasks that are repetitive in nature and can, ideally, be codified (i.e., we can express them in a way that lets the machine know what to do without intuition or human supervision). AI is also good in pure brute-force situations: speed of calculation, memory capacity, consistency, lack of fatigue. The cognition-intensive tasks that humans perform better, on the other hand, involve soft skills: negotiation, persuasion, situational awareness, cultural sensitivity, historical context, emotional intelligence, problem solving, intuition, empathy, creativity and so on. And these skills are not only needed in today's knowledge-intensive job environment; they are also highly valued as a market-making force of supply and demand: the growth of skilled craftsmanship in core markets, and importantly the shift in public perception and demand for such products, is evident as people turn to hand-crafted, artistic, human-produced goods over mass-produced, mechanised clones. That is an interesting trend to watch in an era of full automation and over-supply of mass-produced artefacts.

AI-driven automation can take its toll even on the people who produce software itself. The research field of program synthesis (in layman's terms, code writing code) is not new, but recent advances in machine learning bring a new force to the table: DeepCoder uses a technique that creates new programs by piecing together lines of code taken from existing software, just as a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall. One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so it could piece together source code in ways humans may not have thought of. DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.
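
As a rough illustration of the idea (not DeepCoder itself), the sketch below enumerates compositions of small code fragments over a toy set of primitives, trying first the fragments that a "model" scores as most useful; here the scores are hard-coded, whereas DeepCoder predicts them from the input/output examples.

```python
from itertools import product

# Toy library of code fragments the synthesiser can compose.
PRIMITIVES = {
    "sort":    sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double":  lambda xs: [2 * x for x in xs],
    "drop1":   lambda xs: xs[1:],
}

# Stand-in for learned scores: predicted usefulness of each fragment.
PREDICTED = {"sort": 0.9, "reverse": 0.7, "double": 0.1, "drop1": 0.2}

def run(prog, xs):
    """Apply the fragments in sequence to the input list."""
    for name in prog:
        xs = PRIMITIVES[name](xs)
    return xs

def synthesise(examples, max_len=3):
    """Return the shortest fragment sequence consistent with all
    (input, output) examples, searching likely fragments first."""
    ranked = sorted(PRIMITIVES, key=PREDICTED.get, reverse=True)
    for length in range(1, max_len + 1):
        for prog in product(ranked, repeat=length):
            if all(run(prog, inp) == out for inp, out in examples):
                return list(prog)
    return None

examples = [([3, 1, 2], [3, 2, 1]), ([5, 4, 9], [9, 5, 4])]
print(synthesise(examples))  # -> ['sort', 'reverse']
```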

However, the creators of such systems are quick to dismiss the possibility of using these techniques to replace human programmers. Rather, the technology could take away the mundane, repetitive parts, leaving human coders to focus on sophisticated, higher-order tasks.

So a pattern emerges from the narrative about AI-driven automation and its potential impact on jobs: automation will happen one way or another, and jobs will be affected but not lost; rather, people will be redeployed to higher-order tasks, focusing on the parts of the job that are denser and require cognitive skills machines don't possess. All that is good in theory, but one thing becomes more and more unclear as the narrative gains traction: what are these higher-order tasks that so many millions of job holders, globally, will be redeployed to do? And how come we haven't managed to fill those higher-order tasks and jobs all these years, and only now that AI automation is finally here do we seriously consider them? To tackle these questions, let's look at the nature of employment and jobs:

Revisiting the notion of employment

It's not only humans' biological limitations in performing certain tasks at speed (tasks a machine can do faster) that justify the move to automation. One should also look at the quality of work, employee engagement, sentiment, motivation and so on. At the start of the 21st century, employee engagement and motivation at work appear to be a big issue:

  • valuation of knowledge contributions: One of the reasons for this seeming mismatch between people's aspirations and what their job environment offers is a deeply rooted system of managing employment that hasn't changed much over the past century or so. As Fumagalli points out: “The crisis of the labor theory of value derives from the fact that the individual contribution today is not measurable and the output tends to escape a unit of measurement, as production tends to become immaterial.” As we have moved at full speed into the technological abundance of the 21st century, the core deliverables of job holders should no longer be measured solely in material terms, since a lot of work today is immaterial, knowledge-driven, network-intensive and dependent on soft skills. Interestingly, as far as human nature is concerned, knowledge and networking are theoretically unlimited, so the principle of scarcity that underpins supply and demand no longer holds. Job holders can, and do, give unlimited value to their employers, and much of it goes unmeasured. It appears that we need to consider different systems for rewarding contributions, as “the only theory of value that appears adequate to contemporary bio-cognitive capitalism, the labor theory of value, is not able to provide one measure.”
  • employee regulations: Most of our labour laws, management regimes and workplace etiquette were designed, and applied at large, at the beginning of the last century. They no longer serve the versatile and dynamic nature of employment today: the model of a single education stream, one job for life and one pension pot no longer holds true. Technology is the main culprit for this mismatch: we have developed and adopted paradigm-shifting technologies faster than we can rethink and redesign our employment systems. Technology gives us a relatively easy landing into the versatile, and ultimately rewarding, “gig economy”, yet somehow we are still struggling to serve that growing sector of our economies with the right laws, frameworks and protocols to make sure that salaried employees and freelancers are treated equally.
  • lifelong learning: As we can no longer apply 20th-century practices to meet 21st-century demands, employees and job holders need to continuously learn and develop new skills. Experience pays a lot, but old knowledge can become obsolete faster than new knowledge is produced and applied in the work environment (the bizarre situation with COBOL-written systems in financial services is telling: there are not enough COBOL experts left to maintain and change them). Knowing what is new and how it could be applied will matter more. Government think tanks paint a pretty convincing picture of the future for employees: “Be willing to jump across specialist knowledge boundaries as technologies and disciplines converge, developing a blend of technical training and ‘softer’, collaborative skills.” Making these transitions to other areas of the business is not going to be easy for some, or even feasible given the daily routine of taking care of business as usual. So the AI assault, and the automation brought by machines, could in theory free up time and allow employees to learn new things and make the transition, as long as AI job automation leads to the redeployment of roles rather than their replacement.
  • The demise of the large company? Although distant in the future, and rather provocative as a thought, AI-driven automation, employee upskilling, job-market pressures, the growth of the gig economy, and the new principles and beliefs of millennials (the largest demographic cohort in today's workforce, expected to make up as much as 60% of it by 2030) could all combine to contribute to the potential demise of large companies as we know them today: global multinationals with tens or even hundreds of thousands of employees and multibillion-dollar revenues, but extremely slow to react and averse to risk-taking. As the battle for talent continues relentlessly and automation flattens the bottom of the job-hierarchy pyramid, large corporations will struggle to justify huge populations of employees on their payrolls. The capable and lucky ones that manage to redeploy large populations across different functions could maintain some of the status of a large company, but most will fail. A new world order could emerge: one where, for each of today's mega-corporations, hundreds of smaller ones emerge, each specialising in a core function or competency we typically see inside large corporations. Akin to the practice of outsourcing, this new world order would redefine the boundaries between corporations; shared functions (from marketing and finance to production floors) would become separate companies serving today's competitors. The battle for differentiation, market share and standing out would then move to a higher order, away from the production floor (in a metaphorical sense, from manufacturing floors to soft-skilled, knowledge-intensive production), focusing on the core competencies of a corporation: quality, expertise, craftsmanship, customer care, etc.

So, is AI the elephant in the room? 

Let's revisit the initial probing question. AI could indeed be the elephant in the room; all the signs are there: meteoric growth, reach across industries and sectors, remarkable results compared with human-level intelligence, an abundance of hardware and software resources to conduct AI at scale (with more work to do there), and finally a society warmed up enough to treat AI as business as usual. But my hunch is that the true, big elephant in the room is not AI. AI will happen, and fast enough for some of us not even to notice. The elephant in the room is the consequences of applying AI at large: a complete, and overdue, revamp of our employment beliefs, frameworks and structures; redefining what work really means in the 21st century; revisiting our engines of employment and the regulations that govern them. All that rethinking and remaking is the elephant in the room; AI is just a trigger, albeit the strongest we have ever had. It's going to be a very interesting, and century-defining, next 10-15 years!

Artificial Intelligence

AI watchdogs: do we need them?

  • 19/03/2017 (updated 25/03/2017)

The recent advances and remarkable progress of Artificial Intelligence (AI) have caused concern, even alarm, among some of the world's best-known luminaries and entrepreneurs. We've seen calls in the popular press for watchdogs to keep an eye on uses of AI technology, for our own sanity and safety.

Is that a genuine call? Do we really need AI watchdogs? Let me unpack some of the reasoning behind this and put the topic into perspective: why AI might need a watchdog; who would operate these watchdogs, and how; and whether they would make any substantial difference or silence the critics.

Watchdogs & Standards

But first, a dictionary-style definition of a watchdog: “a person or organization responsible for making certain that companies obey particular standards and do not act illegally”. The key point to bear in mind is that a watchdog is not just some sort of monitoring and reporting function; it should have the authority and means to ensure standards are adhered to and that companies developing AI do so legally. I think that is quite tricky to establish now, given the extremely versatile nature of AI and its applications. To understand the enormity of the task, let's look at a similar, if not overlapping, area: software engineering standards and practices.

Software engineering is a very mature profession, with decades of practice, lessons learnt and fine-tuning of the art of writing elegant software that is reliable and safe. International standards are finally available and incorporate a huge body of knowledge for software engineering (SWEBOK), which describes “generally accepted knowledge about software engineering”; it covers a variety of knowledge areas and has been developed collaboratively with input from many practitioners and organisations in over 30 countries. Other efforts to educate and promote ethics in the practice of writing correct software emphasise the role of agreed principles which “should influence software engineers to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments concern for the health, safety and welfare of the public is primary”. But these efforts did not appear overnight. Software engineering standards, principles and common practice took decades of development and trial and error to reach the relatively condensed set of standards we have today, down from the more than 300 standards from 50 different organisations we had 20-odd years ago.

But even with carefully designed standards and decades of accepted common practice in software engineering, we seemingly cannot eliminate the uncomfortable occurrence of the infamous software bug. As everyone who is even remotely interested in the safety of software-based systems knows, getting it right is not easy: over the years we have had numerous software disasters, including ones that caused fatalities, loss of property and value, widespread disruption, and so on. And all of it down to software bugs that somehow crept through into the final production code. In most cases the standards were followed, and to a large extent the software system was deemed to be okay. But the point is not to discredit the usefulness of standards: things would arguably have been a lot worse without standards to keep them in check and to make sure the software was produced in an acceptable manner and would behave as expected, especially in safety-critical systems (from nuclear reactors to autopilots). The point to keep in mind, as we consider following this tried and tested approach for AI systems, is that having standards will not by itself prevent the embarrassing, and sometimes fatal, disasters we aim to avoid.

AI is different

AI also brings to the table a lot of unknowns which make it difficult even to start thinking about establishing a standard in the first place: as some of the most experienced folks in this space point out, AI verification and validation is not easy. We can encounter issues with the brittleness of AI systems; with dependencies on data and configurations that constantly change as the system improves on its past states (the ability to keep learning is a key advantage of machine learning); with AI systems that are non-modular, where changing anything could change everything; and with known issues around privacy and security, and so on.

But the one thing that appears to be the crux of the problem, and concerns a lot of people, is the interpretability of AI systems' outcomes. For example, the well-known industry-backed Partnership on AI states as one of its key tenets:

“We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.”

And they are not the only ones; the UK government’s chief scientist has made similar calls: “Current approaches to liability and negligence are largely untested in this area. Asking, for example, whether an algorithm acted in the same way as a reasonable human professional would have done in the same circumstance assumes that this is an appropriate comparison to make. The algorithm may have been modelled on existing professional practice, so might meet this test by default. In some cases it may not be possible to tell that something has gone wrong, making it difficult for organisations to demonstrate they are not acting negligently or for individuals to seek redress. As the courts build experience in addressing these questions, a body of case law will develop.”

Building that body of case law will take time, as AI systems and their use mature and evidence from the field feeds in new data that helps us better understand how to regulate them. Current practice highlights that this is not easy. For example, the well-publicised, and unfortunately fatal, crashes involving a famous AV/EV car manufacturer puzzle many practitioners and law-enforcement agencies. The interim report on one of the fatal crashes points to human driver error, as the driver did not react in time to prevent the crash, but the role and functionality of the autopilot feature is at the core of the saga: did it malfunction by failing to correctly identify the object that obstructed the vehicle's route? It appears that the truck was cutting across the car's path rather than driving directly in front of it, which the radar is better at detecting, and the camera-based system wasn't trained to recognise the flat slab of a truck's side as a threat. But even if it did malfunction, does it really matter?

The US transportation secretary voiced a warning that “drivers have a duty to take seriously their obligation to maintain control of a vehicle”. The problem appears to lie in the public perception of what an autonomous vehicle can do. Marketing and commercial interests seem to have pushed the message of what can be done, rather than what can't be done, with a semi-autonomous vehicle. But this is changing now in the light of the recent crashes, and interestingly, where it was AI (by way of machine vision) and conventional technology (cameras and radar) that let us down, the manufacturer is bringing in more AI to alleviate the situation and deliver a more robust and safe semi-autonomous driving experience.

Where do we go from here?

So any rushed initiative to put out some sort of AI watchdog in response to our fears and misconceptions about the technology will most likely fail to deliver the anticipated results. We would do better to spend more time and effort making sure we push out the right message about AI: what it can do and, most importantly, what it can't do. AI systems and current practice also need to mature and reach a state where we have, at the very least, a traceable reasoning log, and where the system can adequately, and in human terms, explain and demonstrate how it reached its decisions. Educating the public and law and policy stakeholders is critical too. As the early encounters between law-enforcement agencies and semi-autonomous AV/EV vehicles show, there is little understanding of what the technology really does, and of where you draw the line in a mixed human-machine environment: how do you identify the culprit in a symbiotic environment where it is hard to tell who's in control, human or machine? Or both?

I sense that what we need most is not a heavy-handed, government-led or industry-lobby-led watchdog, but commonly agreed practices, principles, protocols and operating models that will help practitioners deliver safe AI and beneficiaries enjoy it. It may well be that once we develop a solid body of knowledge about best practices, protocols and models, we set up independent watchdogs, free from commercial influence, to make sure everyone is playing by the rules.

Artificial Intelligence

The transformative power of AI: handle with care

  • 12/03/2017 (updated 25/03/2017)

Artificial Intelligence (AI) has come of age. Since its very early days, from the Dartmouth Conference in the mid-50s and the early research labs in Europe and Japan, AI has grown considerably and nowadays enjoys widespread recognition and awe from practitioners and beneficiaries of AI technology. AI has also gone through periods of lesser excitement, with similar promises that unfortunately did not live up to their lofty expectations and resulted in “AI winter” periods. But today, 60-odd years since its foundation as a science and engineering discipline, and despite the current hype around AI, the prospect of another AI winter is remote, and AI is poised to become the engineering foundation of 21st-century technology systems: the kind of autonomous, intelligent, resilient and fault-tolerant systems that will serve our coming world of IoT, drones, synthetic-everything and self-driving vehicles.

Regardless of whether we get to that utopian world sooner or later, we need to focus more resources on making sure we fully grasp and understand each evolutionary step of AI before we embark on the next one. There is no doubt that, eventually, AI can deliver (fragments of) the automated world we used to read about and watch in sci-fi movies. But how we get there is crucial.

Commercial push

First, there is an unprecedented boom in commercial interest in AI applications (AI is projected to become a $50B industry by 2020). Large technology conglomerates drive much of that interest, as the stakes are higher than ever: mastering the current state of practice and owning the state of the art in AI gives you a unique competitive advantage over your peers. But the narrative ought to be more open, pointing not only to the great things that AI can do today but also to all the things it can't do (at least not yet):

  • Creative tasks (generic problem-solving, intuition, creativity, persuasion)
  • Adaptability (situational awareness, interpersonal interactions, emotional self-regulation, anticipation, inhibition)
  • Nearly instantaneous learning (AI systems need humongous data sets for training; humans, by contrast, deploy common sense, general knowledge and associative cognition to grasp new concepts and adapt almost instantaneously to new situations)

State of practice

Second, there is the research, engineering and academic practice of AI today. Much of the focus, and most of the success stories, of AI nowadays lie in the machine learning (ML) sub-field, with renewed interest in neural networks (NNs) and their recent incarnation, deep learning (DL). Distinctly different from the practice of GOFAI (good old-fashioned AI), these engineering breakthroughs owe much of their ingenuity to a clever combination of mathematics, statistics and attempts to simulate human neural networks. That connectionist approach to emulating intelligence works well in certain domains, but we need to consider the downsides of applying it at will:

  • Known unknowns: with current deep neural networks, some of which have hundreds of layers (the winning NN in the ImageNet image-recognition competition had more than 160 layers), it is plausible to arrive at answers that appear correct but whose origins we have no way of comprehending or tracing. This is a known problem, acknowledged even by pioneers of the field. It can lead to situations where it becomes increasingly difficult to figure out how the AI achieves its results: we cannot explain why an answer is the way it is, or simply unpack the reasoning steps used to derive it. Progressively probing the interactions within NNs, down to the neuron level, is one way of uncovering those reasoning steps.
  • Right tool, wrong problem: today's practice of NNs, and more specifically DL and data science in general, uses a plethora of well-known algorithms, techniques and engineering knowledge from statistics, maths and neuroscience; typically, dozens of algorithms are available for any given problem. But choosing the right algorithm for the right problem is not an easy task. Even with the right apparatus in hand, we can still end up misreading our assumptions about the real world and fall into selection bias.
  • Game for the rich? Undoubtedly, NNs and related systems have achieved incredible things over the past 4-5 years, from the Jeopardy! TV quiz show, to AlphaGo conquering Go, to playing poker, and beyond. Regardless of the endless possibilities these engineering feats open up, we should also note that the AI systems built to achieve them used vast resources and effort: AlphaGo, for example, used 280 GPUs and 1,920 CPUs, which is a lot of hardware in absolute terms, and both the IBM and Google teams employed dozens of the world's brightest engineers and millions of dollars of investment. Progressing AI to the next level will almost certainly mean making it affordable for everyone, and efficient down to the level of a Google search query. We are not anywhere near that yet, but initiatives to democratise AI can certainly help move us in the right direction.
  • Give me (more) data: current AI practice is data-hungry; the more, the better. And the data should ideally be clean of any noise that could introduce contradictions during training. Leading ML systems, for example, have used millions of labelled images to correctly identify the object in question; in contrast, a young child may need only one image. We need less data-hungry approaches and more intelligent, symbolic input for our state-of-the-art ML systems.

We also need to bring symbolic AI and connectionism to work together, as AlphaGo does by combining tree search and other search algorithms with its deep learning networks.
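
A minimal sketch of that combination (not AlphaGo itself, and on a toy subtraction game rather than Go): depth-limited symbolic search in the middle, with a stand-in "value network" evaluating the leaf positions. In a real system the heuristic below would be a trained deep network.

```python
def value_net(stones):
    """Stand-in for a learned value function: score a Nim-like position
    for the player to move (+1 winning, -1 losing). A real system would
    evaluate the position with a trained neural network here."""
    return 1.0 if stones % 4 != 0 else -1.0   # toy domain knowledge

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]   # take 1-3 stones

def search(stones, depth):
    """Depth-limited negamax: symbolic tree search in the middle,
    learned evaluation at the leaves."""
    if stones == 0:
        return -1.0            # player to move has already lost
    if depth == 0:
        return value_net(stones)
    return max(-search(stones - m, depth - 1) for m in legal_moves(stones))

def best_move(stones, depth=4):
    return max(legal_moves(stones), key=lambda m: -search(stones - m, depth - 1))

print(best_move(10))  # prints 2: taking 2 leaves the opponent a losing pile of 8
```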

Impact

Third, now more than at any other point in AI's history, we need to worry about, and care for, governance and protocols for AI applications. Job automation, and the elimination of entire processes, tasks and even industries, are possible with widespread use of AI. Whatever the mitigating measures to turn this around and make AI an economic boon for everyone, we need to consider how we regulate AI applications and prepare for their (future) impact:

  • Augment, not replace, jobs: automation of manual, repetitive tasks is coming, and coming fast, and AI is the main culprit. But automating part of somebody's job does not equate to automating the entire job function and replacing the job holder with an AI-driven machine. What we can automate are the repetitive, manual, tedious tasks that occupy a significant portion of someone's working day; a recent survey of US occupations found that such tasks take up more than 50% of time spent at work.

All that freed-up time and capacity could then be used for higher-order cognitive tasks that we can't automate and that need human judgement. Essentially, we aim to augment jobs, not eliminate them.

  • Self-regulate: government-led guidance, industry best practice, legal mandates and other stipulations could shape how AI automation is brought in and used, but self-regulation could also work: using workforce and organisational dynamics to inform the best strategies for automation (when, where and how). For example, Aviva, a large UK insurer, polled its employees on the impact of robots automating their jobs. This direct approach gives transparency to the process and explores execution paths in a coherent and open manner. Retraining possibilities for workers whose jobs could be automated include writing the scripts and business logic that robotic process automation (RPA) tools will use, thereby lifting their jobs to a higher cognitive order rather than replacing them; they work alongside their mechanical RPA peers.
  • Ethics: the widespread application areas for AI give us an interesting puzzle: who is going to govern AI applications, and in which areas? Say you're concerned about critical national infrastructure, and AI technology inadvertently impacts a nation's cyber-defences; you will naturally try to restrict its uses and reach. But what if the same technology is used to fight back against cyber threats, to identify or cure diseases, or to provide banking services to the billions of unbanked? The decision-making of ethics committees should be elevated beyond the popular opinions of celebrated leaders, business gurus and luminaries, and based on industry-, sector- and application-specific frameworks with input and regulatory oversight built in to ensure alignment with societal values. Dealing with future AI applications is even more challenging, as more is yet to come: carefully trained AI systems, for example, could accurately predict the outcomes of human-rights trials, paving the path to a future where AI and human judges work in sync. In these situations we should seek to develop AI systems that move away from the binary nature of their answers and accommodate other paradigms, such as fuzzy systems or good-enough answers in the context of open, semantic systems.

Clearly, AI is going through a remarkable renaissance. Attention, funding, applications, practitioners, systems and engineering resources have never been so plentiful, or so concentrated, in its 60-odd-year history. But as AI becomes increasingly relevant to almost every industry and human activity, we also need to be aware of the need to regulate and channel its transformative power so that it truly enhances human lives and experiences.

Automation

6 vulnerabilities of blockchain technology and how we will…

  • 05/03/2017 (updated 25/03/2017)

“The future belongs to those who see possibilities before they become obvious”, T. Levitt

Blockchain technology is an exciting and fascinating topic. In its short eight years of existence it has attracted massive attention from the world's brainiest folks and billions of dollars in funding, spurred thousands of new companies (mostly startups), and promises to turn almost every industry as we know it upside down and to provide the foundations for the next internet, the internet of value. But is this grand vision and lofty target feasible and on track?

That is a rather rhetorical question, as it tackles a utopian scenario, so I'd rather focus on practical issues I have identified from my experience and exposure to blockchain: six key, broadly defined vulnerabilities where more concentrated work is needed to bring us closer to the utopia described above. For the sake of brevity, I will use blockchain as a one-size-fits-all term, inclusive of public blockchains (bitcoin, ethereum, etc.), privately deployed (permissioned) blockchains, distributed ledgers (non-blockchain based) and shared ledgers.

Identifying vulnerabilities early in the course of a transformational technology's journey is not necessarily a contrarian move. Rather, it helps to constructively identify areas that are vulnerable and helps practitioners focus on ways to overcome subtle problems (as with, for example, the vulnerabilities I identified for the visionary semantic web technology). That way, you acknowledge weaknesses early enough to apply remedies before getting stuck in fruitless explorations and wasting resources and time. Technology companies have a way of tackling such situations with pivots, when they can.

Blockchain technology enjoyed a unique situational opportunity: emerging in the aftermath of the 2008 global financial crisis (GFC), it presented itself with huge transformational potential. Unique triggers that aligned powerful forces in the global financial and geopolitical ecosystem influenced and accelerated blockchain's rapid trajectory: the 2008 GFC brought a nearly catastrophic erosion of trust in banking institutions among large portions of the public; financial services institutions embarked, willingly or otherwise, on digital transformation journeys; the sharpest-minded of a despondent public sought ways to take full control of their financial affairs away from the banks; and great technological innovations helped, with seemingly unlimited computational power and planetary-scale distributed systems. All that combined led to the birth of bitcoin in 2009. And then blockchain emerged. The rest is history. Nowadays, blockchain applications and pilots in financial services are omnipresent, funding is in abundance, and it seems that large corporates and VCs can't get enough of the pie and keep investing.

But what should we ideally expect from such a unique confluence of events and resources around blockchain technology? T+0 trade processing; crypto-currencies in the (digital) pocket of every individual; crypto-assets available 24/7/365 for immediate trading of any asset and transfer of value globally, from anyone to anyone, using sophisticated smart contracts; the elimination of duplicated record keeping; on-demand automation; intelligent autonomous oracles regulating the flow of value through the execution of smart contracts; and eventually DAOs (neatly engineered and governed, to avoid the well-known issues of the first attempts in this space). All of that would ideally bring benefits to society as a whole and tackle the huge issue of the more than 3 billion people out there who don't have access to banking.

Vulnerabilities

To get to that mind-boggling future, we need to do more work on:

1. (Universal) adoption 

Blockchain is great if every market participant is using it; you need two hands to clap. It's a network, and membership should be seamless and streamlined. The bigger the network, the greater its effect and benefits for its participants. In capital markets, sell-side and buy-side institutions, intermediaries, exchanges and agents (retail and institutional), and ultimately regulators, all need to be in it, one way or another. Blockchain has the transformational effect of “math-based money”, where a trusted third party is not present or needed (at least not in today's form), but the current complex financial-markets ecosystem won't change overnight. Practically it can't, and tactically it shouldn't.

How to remedy: as every journey starts with a small first step, blockchain adoption has already started in small, concentrated pockets of the ecosystem where existing functionality can move to peer-to-peer (p2p) with relative ease. The emergence of private blockchain systems helps counter initial roadblocks stemming from perceptions of public blockchains' reliability; consortia bringing together many diverse market participants have emerged and proliferated, which helps in understanding dynamics, vested interests and shared goals; and these consortia could also lead to the much-needed standards in this space (so that everybody adheres to the same rules).

2. Standards 

Capital markets participants will need to agree on standardisation of the digital representations of various asset classes on blockchain(s) and of their underlying securities. Failing to do so in a cost-effective and timely manner, with agreed practice on how business is conducted, could undermine the benefits of a blockchain in the first place.

How to remedy: consortia are one way to foster collaboration and work on blueprints for blockchain standards. Another is to get sector regulators to weigh in and enforce principled ways of conducting business with blockchain(s). Standards take time to be agreed upon, to mature and to be adhered to, but the first steps toward standardisation of the conduct of business using blockchain look promising.

3. Scalability

Current financial markets enjoy some of the most complex and scalable operational technology of any industry. In the transactions space, for example, the VISA network averages 2,000 transactions per second (tps), and at peak times that figure rises to approximately 50,000 tps. By contrast, the largest public blockchain out there, bitcoin, averages around 7 tps. In addition, block size limits present some interesting challenges for system architects, as not much data can be squeezed into the few megabytes available (from the standard 1MB up to around 8MB with some clever engineering). Also, the time taken to process and validate transactions (mining) affects the throughput rate and, importantly, the ordering of transactions (which can lead to undesirable double-spending effects, Sybil attacks, etc.).
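
A back-of-envelope calculation shows roughly where bitcoin's single-digit tps figure comes from; the block size, average transaction size and block interval below are rounded, assumed values.

```python
block_size_bytes  = 1_000_000   # ~1 MB block size limit
avg_tx_size_bytes = 250         # rough average transaction size
block_interval_s  = 600         # ~10 minutes between blocks

tx_per_block = block_size_bytes / avg_tx_size_bytes   # ~4,000 transactions per block
tps = tx_per_block / block_interval_s                 # ~6-7 transactions per second

print(f"{tx_per_block:.0f} tx per block, {tps:.1f} tps")
```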

How to remedy: if it were only a matter of firepower, continuous work and relentless innovation have already paid some dividends: Symbiont reported 87,000 tps in certain transactional circumstances, and there is ongoing work on scalable block size limits, along with the emerging Lightning Network. But viewing the scalability issue of blockchain(s) purely from a firepower angle is misleading; new architectures and clever ways of combining the best bits of rudimentary blockchain(s) are another way to achieve scalable, enterprise-grade blockchain(s).

4. Regulations and governance 

Blockchain(s) can be a boon for regulators (they could potentially help prevent another 2008-style GFC, as stated in a White House hearing last year), but also a challenge. Since blockchain(s) present, at least in their native form, a self-regulated network of transactions from the participants' perspective, regulators and policy-makers need to weigh in to protect consumers from improper activity.

How to remedy: global regulators and central banks have begun to notice blockchain over the past couple of years. From the high-profile ones to local state regulators, from the West to the East, regulators all over the world are gearing up for the blockchain era. For example, in just a few days the SEC will deliver its verdict on the Winklevoss brothers' bitcoin ETF application; if regulatory approval is granted, it could open the floodgates for retail investors and cryptocurrency aficionados, with as much as $300M potentially invested in that ETF alone in its first week. Regulators should also take into consideration the dynamics of regulation in blockchain environments: enforcing versus making the rules. Enforcing the rules in such an environment is mechanical, given by the very nature of blockchains; but making the rules defies the logic of using a blockchain in the first place! This is the “governance paradox” of blockchain.

5. Anonymity and the off-chain world

One of the cornerstones of capital-markets trading strategies and dynamics is that buyers and sellers do not always have to reveal themselves to each other or make their commercial intentions known prior to a trade. With blockchain technology, with its self-regulated, open network and the underlying immutability of transactions, this is not easy to achieve. On the other hand, regulators and policy-makers need to ensure that access and traceability are features they can technically and rightfully rely upon to do their job.

How to remedy: the anonymity concern around blockchain(s), mostly public ones with a proof-of-work consensus mechanism, can be tackled by adopting private, permissioned blockchain(s), or even a different consensus mechanism (see, for example, G. Samman's excellent review of consensus mechanisms), where the validators are known and trusted, resembling today's model of market intermediaries (ACHs, CSDs, etc.). Regarding the immutability property of blockchain(s), not all data need to live on the chain; there are circumstances where data and triggers can, and should, sit off the chain. In a parametric micro-insurance context, for example, oracle data feeds smart contracts to enable trigger-based execution when a condition is met. But since smart contracts are executed independently by every node on the chain, we need to guarantee that every copy of the contract receives exactly the same information from the oracles, so that computation stays deterministic, with no inconsistencies due to lapses in network uptime or temporary unavailability of the oracle data. A workaround is to have oracles push data onto the chain, rather than the smart contract pulling it in; that way we guarantee that every node sees the same data. Hence oracles (and other data and business-logic structures) can live off the chain and feed the on-chain components on demand or as the protocol design dictates.
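
A minimal sketch of that push-based oracle pattern, with hypothetical names (OracleRecord, Chain, parametric_payout) standing in for real oracle and smart-contract machinery: the oracle writes one signed observation on-chain, and every node's copy of the contract reads that same recorded value, keeping execution deterministic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OracleRecord:
    key: str          # e.g. "rainfall_mm/region-17/2017-03-01"
    value: float
    signature: str    # oracle's signature over (key, value); verification omitted

class Chain:
    """Toy shared ledger: an append-only store of oracle records."""
    def __init__(self):
        self.records = {}

    def push(self, record: OracleRecord):
        self.records[record.key] = record   # the oracle pushes data on-chain

def parametric_payout(chain: Chain, key: str, threshold: float, payout: float) -> float:
    """Contract logic run identically by every node: it only reads the
    on-chain record and never calls the oracle directly."""
    record = chain.records.get(key)
    if record is None:
        return 0.0                          # no trigger data recorded yet
    return payout if record.value >= threshold else 0.0

chain = Chain()
chain.push(OracleRecord("rainfall_mm/region-17/2017-03-01", 132.0, "sig"))
print(parametric_payout(chain, "rainfall_mm/region-17/2017-03-01", 100.0, 250.0))  # 250.0
```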

6. Switching cost 

Adopting blockchain(s) is not a weekend system upgrade completed before markets open again on Monday. Even the simplest POCs and pilots take weeks to come to fruition (excluding the “quick and dirty” hacks done in a few hours, which are not intended to produce enterprise-ready production systems), let alone intentional system replacements, which can take years to materialise. There are plenty of reports and market research out there pointing to billions of dollars in cost savings as a benefit of adopting blockchain(s). But there is little evidence or reporting on what the switching cost is, or even how to quantify it.

How to remedy: practically speaking, we need to wait a few more years until we have completed cycles of switching large, core financial market infrastructure systems to blockchain-based ones. Early work points to clever ways of preparing such a switch (for example, BNYM's BDS 360 system), but we also need to understand what works and what doesn't (see, for example, Deutsche Bank's commentary on bitcoin's failure to eliminate intermediaries). General guidance and shared experience also help, and we need more of both.

The road to adolescence

Arguably, these are not insurmountable obstacles; with all the attention and resources devoted to blockchain technology development, it is only a matter of a few years before we have solid blockchain systems underpinning the global financial services ecosystem. But in other industries blockchain arguably has more immediate impact potential, in areas such as supply-chain management, resilient identity management, records management (healthcare, personal, government, real estate, etc.), insurance (parametric, contextualised, micro, p2p) and other early explorations. As blockchain applications progressively mature across industries, the great catalyst that will bring about fundamental change will arrive: the consumer. A great catalyst by their sheer volume, consumers starting to use blockchain(s) and the services built on them will create an impact at least equivalent to the arrival of e-commerce on the web and the growth of social media.

But the current focus should also shift from enhancing the existing mechanics of blockchain(s), such as higher transaction throughput, bigger block sizes, better consensus mechanisms and more flexible governance protocols, to bringing in new techniques and knowledge from other fields and practices. For example, current work on smart contracts could benefit from the solid body of knowledge and work done in AI on autonomous multi-agent systems (think of agents as smart contracts on blockchains), especially on automated coalition formation and mechanised trust protocols. Industry practitioners could benefit from engagement with academia, as we clearly see in the collaboration between Barclays and Imperial College London on smart-contract research. One might observe that, unlike other transformational technologies, blockchain initially had little to do with academic research and development. An almost “not invented here” syndrome kept the best academic labs muted in the first few years, while much of the work was driven by communities of practitioners. But this has started to change lately, with world-class research hubs forging partnerships and setting up blockchain-specific labs.

Transformational technologies take time to pay heavy dividends. It's a long and impactful journey, not a quick sprint with a shock effect. The pace at which old technology is substituted by new depends heavily on ecosystems, and blockchain is doing well on that front. The most impactful innovations arrive at the end of the hype cycle and tend to stay with us for a long time. Blockchain technology is getting there, and it will keep us busy on its journey to adolescence.
