Artificial Intelligence

The future is already upon us

  • 10/02/2019

I was interviewed for an extensive piece on AI and Blockchain in the leading Greek diaspora publication, Business File. The piece takes a personal-journey angle, from my early days of developing AI-based intelligent assistants for software development all the way to recent engagements on AI strategy and business blockchains. As the interviewer, Mrs Anastassiou, eloquently observes:


As AI has already crossed our threshold for good, Dr Yannis Kalfoglou – having worked for over 28 years in the field – is certain of the symbiotic relationship between machines and humans. He defends the merits of blockchain technology in the financial and business world, and presents cryptocurrencies, predicting that by 2030 there will be an “AI arms race” between the US and China.

I pointed out that a lot of the progress behind recent AI breakthroughs is related to the abundance of resources for onboarding AI and trying new things: you can get (mostly) free education and training on the newest machine learning models and techniques, predominantly neural networks – a set of algorithms, modelled loosely on the human brain, designed to recognise patterns. You can also get (mostly) free software to build and run your models, backed by relatively inexpensive infrastructure, and put them to use in the real world in a matter of days – not years!
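To make that concrete, here is a minimal, illustrative sketch of how little code it now takes to train a small neural network pattern recogniser with freely available software (assuming scikit-learn is installed; the dataset and layer sizes are arbitrary choices, not anything discussed in the interview):

```python
# Minimal sketch: a small neural network (multi-layer perceptron) that learns to
# recognise patterns in handwritten digits, using only freely available software.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 64 and 32 units; the sizes here are illustrative only.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)                  # learn patterns from labelled examples
print("held-out accuracy:", model.score(X_test, y_test))
```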

There was a lot of discussion on the hot topic of machine vs. human. On the question of whether AI could become as effective as human brainpower, I posit that it boils down to what we mean by “effective”. If we assume that AI is some kind of super-intelligent machine that can think like a human, has emotional intelligence, creativity and empathy, and is able to exhibit soft intelligence features as well as all sorts of non-mathematical traits, then I would safely say that we might never see that! However, if by “effective” we mean an AI system which possesses immense computing capabilities and can perform better than humans in certain well-defined tasks – which deal mainly with computation, large data sets, arithmetic, pattern matching, and general brute-force computing – then I would say that we are already experiencing this phenomenon and we will see more of it!

My main thesis on the topic of business applications of AI is that AI has to be seen as a Swiss army knife whose uses vary across the organisation, for example:

  • Customer experience: shop floor events, virtual changing rooms, customer analytics, identifying and promoting the right products for the right customer, pro-active and automatic support using ChatBots, and many other means.
  • Business interaction: predictive and prescriptive analytics to help manage suppliers, vendors and product lines; fraud detection systems to rule out fraudulent transactions; automated supply chain functions, from inventory control to order fulfilment.
  • Regulatory compliance: automatic reporting systems and compliance checking systems.
  • New product development: identifying and launching new products in segments not previously active, using analytics and AI systems to predict possible revenue volumes, cost-effective marketing strategies, and real-time monitoring of products and services in the field.
  • New modes of interaction with customers, clients and employees: ChatBots, image recognition systems, digital ID and biometrics.

There are, understandably, concerns about the symbiosis of human-machine systems, especially when the autonomous systems of today’s labs and blueprints become the norm in a few years’ time. Regulation and control frameworks could be one way to ensure maximum safety for humans, but any rushed initiative to control AI in response to our fears and misconceptions about the technology would be highly misguided. AI systems and current practice need to reach a state where we can understand and evaluate the reasoning behind a system and rationalise it in human terms. It may well be the case that once we develop a solid body of knowledge about best practices, protocols, and models, we can set up a workable international system of rules and regulations governing AI.

Artificial Intelligence

AI’s impact on jobs: (not) the elephant in the…

  • 26/03/2017

AI driven automation is happening. It benefits some the most, and it causes anxiety, even fear, in many more. It has captured the imagination of futurists and occupies prime time in the diaries of decision makers at governments, large corporations and policy bodies, as well as every individual. But there is a subtle question that bothers me about all this: is AI the elephant in the room? Is AI that out-of-the-ordinary phenomenon that nobody acknowledges, or is willing to acknowledge, which finally matured after 60-odd years of hard work and is now capable of automating millions of jobs worldwide, potentially rendering tens of millions of people unemployed overnight? Or is it something else that we fail to acknowledge or even grasp?

AI driven automation is here to stay

Undoubtedly, AI is commercially attractive. There are a lot of benefits in applying AI driven technologies to automate routine, mundane, repetitive tasks. And of course, that automation is commercially attractive as it can free up time for full time employees (FTEs) to concentrate on higher order tasks, or allow them to be redeployed to do other things, increasing the output and profitability of the business. At least that’s the narrative, the nice-picture outcome. But there is also the ugly-picture outcome: if AI driven automation is not handled properly, a lack of planning or even resources could mean there is no redeployment of roles, or there are no higher order tasks to do, which leads to straightforward replacement and job losses. That’s the core of the jobs-fear narrative from some of the sceptics of AI driven automation technology.

If we look at the evolution of the corporate world over the past 20-30 years, a lot has changed, and the pace of change continues to be fast. As McGowan points out in her excellent, informative piece: “the purpose of the company has changed from one that aggregated work effort in order to optimize productivity and create value for customers to one that aggregates profitability in order to create value for shareholders”. This has had a profound impact on the treatment of material resources in the 21st century, where intangible products take precedence over material ones (for example, software vs. brick-and-mortar, and the millennials’ preference for shareable use of assets over ownership). It has, in her words, “shifted the workforce from an asset to develop to a cost to contain as companies created more and more financial value with fewer and fewer humans”.

The picture on the left is worth a thousand words (as the old saying goes): in the space of less than 30 years, leading software companies in the States produced 40x the market cap of traditional manufacturing companies with 10x fewer FTEs.

 

Treating your FTEs purely as a cost, though, can have a profound impact on the future of employment in the face of AI-driven automation, as I will analyse further down. But first, let’s also have a look at the change in job requirements over the past 30-40 years. It appears that by 2014 over 90 million jobs in the States involved some sort of cognitively intensive function. That has almost doubled since the early 1980s, whereas routine jobs, manual and non-manual, experienced much less impressive growth.

The growth of cognitive based jobs is not a surprise given the knowledge economies we have today, and it also gives a clearer view of where and how to apply AI driven automation at large: AI is good when we have jobs with frequent, high-volume tasks that are repetitive in nature and can, ideally, be codified (i.e., we can express them in a way that a machine understands what to do without intuition or human supervision). AI is also good in pure brute-force situations: speed of calculation, memory capacity, consistency, lack of fatigue. On the other hand, the cognitively intensive tasks that humans perform better involve soft skills: negotiation, persuasion, situational awareness, cultural sensitivity, historical context, emotional intelligence, problem solving, intuition, empathy, creativity and so on. And it appears that these skills are not only needed for the knowledge intensive job environment of today, but are also highly valued as a supply-demand market-making force: the growth of skilled craftsmanship in core markets, and importantly the shift in public perception and demand for such products, is evident as people turn to hand-crafted, artistic, human-produced products over mass-produced, mechanised clones. That’s an interesting trend to watch in the era of full automation and over-supply of mass-produced artefacts.

AI driven automation can take its toll even on the producers of such products. The research field of program synthesis (in layman’s terms, code writing code) is not new, but recent advances in machine learning bring a new force to the table: DeepCoder uses a technique that creates new programs by piecing together lines of code taken from existing software – just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall. One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so it could piece together source code in ways humans may not have thought of. DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.
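To make the idea concrete, here is a toy sketch of the underlying technique – synthesising a program from input/output examples by enumerating combinations of known code fragments. The fragment library, examples and search strategy are all invented for illustration; DeepCoder itself ranks fragments with a learned neural model rather than searching blindly.

```python
# Toy program synthesis from input/output examples: enumerate short pipelines of
# known code fragments and return the first one that reproduces every example.
from itertools import product

FRAGMENTS = {                                 # a small, hypothetical fragment library
    "sort":     lambda xs: sorted(xs),
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
    "double":   lambda xs: [2 * x for x in xs],
}

def synthesise(examples, max_len=3):
    """Find a pipeline of fragments consistent with all (input, output) pairs."""
    names = list(FRAGMENTS)
    for length in range(1, max_len + 1):
        for pipeline in product(names, repeat=length):
            def run(xs, pipeline=pipeline):
                for name in pipeline:
                    xs = FRAGMENTS[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return pipeline
    return None

# "Remove the negatives, then sort in descending order", specified only by examples.
examples = [([3, -1, 2], [3, 2]), ([0, 5, -7, 1], [5, 1, 0])]
print(synthesise(examples))                   # ('sort', 'reverse', 'drop_neg')
```

A learned model, as in DeepCoder, would simply prioritise which fragments to try first, so the same search finds a consistent program far sooner.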

However, the creators of such endeavours are quick to dismiss the possibility of using these techniques to replace human programmers. Rather, they could take away the mundane, repetitive parts, while human coders focus on sophisticated, higher order tasks.

So, there is a pattern emerging from all the narrative about AI driven automation and its potential impact on jobs: AI automation of jobs will happen, one way or another; jobs will be impacted, but not lost; rather, people will be redeployed to do higher order tasks and focus on the parts of the job which are denser and require cognitive skills that machines don’t possess. All that is good, in theory, but one thing that becomes more and more unclear as the narrative develops and gains traction is: what are these higher order tasks that so many millions of job holders, globally, will be redeployed to do? And how come we haven’t managed to fill those higher order tasks and jobs all these years, and only now that AI automation is finally here do we seriously consider them? To tackle these questions, let’s have a look at the nature of employment and jobs.

Revisiting the notion of employment

It’s not only the biological limitations of humans in performing certain tasks at speed (tasks that machines can do faster) that justify the move to automation. One should also look at the quality of work, employee engagement, sentiment, motivations, etc. It appears that at the start of the 21st century, employee engagement and motivation at work are a big issue:

  • valuation of knowledge contributions: One of the reasons for this seeming mismatch between people’s aspirations and the offerings of their job environment is a deeply rooted system of managing employment: it hasn’t changed much over the past century or so. As Fumagalli points out: “The crisis of the labor theory of value derives from the fact that the individual contribution today is not measurable and the output tends to escape a unit of measurement, as production tends to become immaterial.” As we have moved at full speed to the technological abundance of the 21st century, it appears that the core deliverables of job holders should no longer be measured solely in material terms, as a lot of work today is immaterial, knowledge-driven, network intensive, and dependent on soft skills. Also, interestingly enough, as far as human nature is concerned, knowledge and networking are theoretically unlimited, so the principle of scarcity that underpins supply and demand no longer holds. Job holders can, and do, give unlimited value to their employers, value which goes unmeasured. It appears that we need to consider different systems for rewarding contributions, as “the only theory of value that appears adequate to contemporary bio-cognitive capitalism, the labor theory of value, is not able to provide one measure.”
  • employee regulations: Most of our labour laws, management regimes, and etiquette were designed and applied at large at the beginning of the last century. They no longer serve the versatile and dynamic nature of employment today: the single education stream, one job for life, one pension pot model doesn’t hold true any more. Technology is the main culprit for this mismatch: we developed and adopted paradigm-shifting technologies faster than we can re-think and re-design our employment systems. Technology gives us a relatively easy landing into the versatile, and ultimately rewarding, “gig economy”, yet somehow we are still struggling to serve that growing sector of our economies with the right laws, frameworks and protocols to make sure that salaried employees and freelancers are treated equally.
  • lifelong learning: As we can no longer apply 20th century practices to meet 21st century demands, employees and job holders need to continuously learn and develop new skills. Experience pays a lot, but old knowledge can become obsolete faster than new knowledge is produced and applied in the work environment (the bizarre situation with COBOL-written systems in financial services is telling: there are not enough COBOL experts left to maintain and change them). Knowing what is new and how it could be applied will become more important. Government think tanks are painting a pretty convincing picture for the future of employees: “Be willing to jump across specialist knowledge boundaries as technologies and disciplines converge, developing a blend of technical training and ‘softer’, collaborative skills.” Making these transitions to other areas of the business is not going to be easy for some, or even feasible given the daily routine of taking care of the business (business-as-usual). So the AI assault and the automation brought by machines could, in theory, free up time and allow employees to learn new things and make the transition, as long as AI job automation leads to redeployment of roles rather than replacement.
  • The demise of a large company? Although distant in the future, and rather provocative as a thought, AI driven automation, employee upskilling, job market pressures, the growth of the gig economy, and the new principles and beliefs of millennials (the largest demographic cohort in today’s work environment, set to make up to 60% of it by 2030) could all combine to contribute to the potential demise of large companies as we know them today: global multinationals, with tens or even hundreds of thousands of employees and multibillion dollar revenues, but extremely slow to react and averse to risk taking. As the battle for talent continues relentlessly, and automation flattens the bottom of the pyramid in the job hierarchy, large corporations will struggle to justify huge populations of employees on their payrolls. The capable and lucky ones that manage to redeploy large populations across different functions could maintain some status as a large company, but most will fail. It appears that a new world order could emerge: one where, for each of today’s mega corporations, hundreds of smaller ones emerge, each specialising in the core functions and competencies we typically see inside large corporations. Akin to the practice of outsourcing, this new world order will re-define the boundaries between corporations: shared functions (from marketing and finance to production floors) will become separate companies serving today’s competitors. And the battle for differentiation, market share and standing out will move to a higher order, away from the production floor (in a metaphorical sense, from manufacturing production floors to soft-skilled, knowledge intensive production), and focus on the core competencies of a corporation: quality, expertise, craftsmanship, customer care, etc.

So, is AI the elephant in the room? 

Let’s revisit the initial probing question: AI could indeed be the elephant in the room; all the signs are there – meteoric growth, it transcends industries and sectors, remarkable results compared to human level intelligence, an abundance of hardware and software resources to conduct AI at large scale (with more work to do there), and finally a society that is warmed up enough to the notion of AI as “business-as-usual”. But my hunch is that the true, and big, elephant in the room is not AI. AI will happen, and faster than some of us will even notice. The elephant in the room is the consequences of applying AI at large: a complete, and overdue, revamp of our employment beliefs, frameworks and structures. We need to redefine what work really means in the 21st century, and revisit our engines of employment and the regulations that govern them. All that re-thinking and remaking is the elephant in the room; AI is just a trigger, albeit the strongest we have ever had. It’s going to be a very interesting and century-defining next 10-15 years!

Artificial Intelligence

AI watchdogs: do we need them?

  • 19/03/2017 (updated 25/03/2017)

The recent advances and remarkable progress of Artificial Intelligence (AI) have caused concern, even alarm, among some of the world’s best-known luminaries and entrepreneurs. We’ve seen calls in the popular press for watchdogs to keep an eye on uses of AI technology, for our own sanity and safety.

Is that a genuine call? Do we really need AI watchdogs? Let me unpack some of the reasoning behind this and put the topic into perspective: why AI might need a watchdog; who would operate these watchdogs and how; and whether it would make any substantial difference and silence the critics.

Watchdogs & Standards

But first, a dictionary-style definition of a watchdog: “a person or organization responsible for making certain that companies obey particular standards and do not act illegally”. The key point to bear in mind here is that a watchdog is not just a monitoring and reporting function of some sort; it should have the authority and means to ensure standards are adhered to and to make sure that companies developing AI do so in a legal manner. I think that is quite tricky to establish now, given the extremely versatile nature of AI and its applications. To understand the enormity of the task, let’s look at a similar, if not overlapping, area: software engineering standards and practices.

Software engineering is a very mature profession with decades of practice, lessons learnt, and fine tuning of the art of writing elegant software that is reliable and safe. For example, international standards are finally available and incorporate a huge body of knowledge for software engineering (SWEBOK), which describes “generally accepted knowledge about software engineering”; it covers a variety of knowledge areas and has been developed collaboratively with input from many practitioners and organisations from over 30 countries. Other efforts to educate and promote ethics in the practice of writing correct software emphasize the role of agreed principles, which “should influence software engineers to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments concern for the health, safety and welfare of the public is primary”. But these efforts did not appear overnight. Software engineering standards, principles and common practice took decades of development and trial-and-error to arrive at the relatively condensed set of standards we have today, down from the 300-plus standards from 50 different organisations we had 20-odd years ago.

But even with carefully designed standards and decades of accepted common practice in software engineering, it seems we can’t eliminate the uncomfortable occurrence of the infamous (software) bug. As everyone who is remotely interested in the safety of software-based systems will know, getting it right is not easy: over the years we have had numerous software disasters, including ones that caused fatalities, loss of property and value, widespread disruption, and so on. And all that due to software bugs that somehow crept through to the final production code. In most cases the standards were followed, and to a large extent the software system was deemed to be okay. But the point is not to discredit the usefulness of standards: it would arguably have been a lot worse without standards to keep things in check and make sure that the software was produced in an acceptable manner and would behave as expected, especially in safety critical systems (from nuclear reactors to autopilots). The point to keep in mind, as we consider following this tried and tested approach for AI systems, is that having standards will not, by itself, prevent the embarrassing, and sometimes fatal, disasters we aim to avoid.

AI is different

AI also brings to the table a lot of unknowns which make it difficult even to start thinking about establishing a standard in the first place: as some of the most experienced folks in this space point out, AI verification and validation is not easy. We could encounter issues with the brittleness of AI systems, and with dependencies on data and configurations which constantly change as systems improve on their past states (a key advantage of machine learning is to constantly learn and improve on its current state); we build AI systems that are non-modular, where changing anything could change everything in the system; and there are known issues with privacy, security, and so on.

But the one thing that appears to be the crux of the problem, and concerns a lot of people, is the interpretability of AI systems’ outcomes: for example, the well-known industry-backed Partnership on AI clearly states as one of its key tenets:

“We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.”

And they are not the only ones; the UK government’s chief scientist has made similar calls: “Current approaches to liability and negligence are largely untested in this area. Asking, for example, whether an algorithm acted in the same way as a reasonable human professional would have done in the same circumstance assumes that this is an appropriate comparison to make. The algorithm may have been modelled on existing professional practice, so might meet this test by default. In some cases it may not be possible to tell that something has gone wrong, making it difficult for organisations to demonstrate they are not acting negligently or for individuals to seek redress. As the courts build experience in addressing these questions, a body of case law will develop.”

Building that body of case law will take time, as AI systems and their use mature and evidence from the field feeds in new data to help us better understand how to regulate AI systems. Current practice highlights that this is not easy: for example, the well-publicised, and unfortunately fatal, crashes involving a famous AV/EV car manufacturer puzzle a lot of practitioners and law enforcement agencies. The interim report on one of the fatal crashes points to human driver error – as the driver did not react in time to prevent the fatal crash – but the role and functionality of the autopilot feature is at the core of this saga: did it malfunction by failing to correctly identify the object that obstructed the vehicle’s route? It appears that the truck was cutting across the car’s path instead of driving directly in front of it, which the radar is better at detecting, and the camera-based system wasn’t trained to recognize the flat slab of a truck’s side as a threat. But even if it did, does it really matter?

The US transportation secretary voiced a warning that “drivers have a duty to take seriously their obligation to maintain control of a vehicle”. The problem appears to be in the public perception of what an autonomous vehicle can do. Marketing and commercial interests seem to have pushed out the message of what can be done, rather than what can’t be done, with a semi-autonomous vehicle. But this is changing now in light of the recent crashes, and interestingly enough, if it was AI (by way of machine vision) and conventional technology (cameras and radar) that let us down, the manufacturer is bringing in more AI to alleviate the situation and deliver a more robust and safe semi-autonomous driving experience.

Where do we go from here?

So, any rushed initiative to put out some sort of AI watchdog in response to our fears and misconceptions about the technology will, most likely, fail to deliver the anticipated results. We would do better to spend more time and effort making sure we push out the right message about AI: what it can do and, most importantly, what it can’t do. AI systems and current practice also need to mature and reach a state where we have, at the very least, a traceable reasoning log, and the system can adequately, and in human terms, explain and demonstrate how it reached its decisions. Educating the public, law and policy stakeholders is critical too. As the early encounters with law enforcement agencies over AV/EV semi-autonomous vehicles show, there is little understanding of what the technology really does, and of where you draw the line in a mixed human-machine environment: how do you identify the culprit in a symbiotic environment where it is hard to tell who’s in control – human or machine? Or both?
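As a small illustration of the kind of post-hoc explanation that is already feasible today, here is a minimal sketch (assuming scikit-learn is installed; permutation importance is just one simple technique picked for illustration, and falls well short of the full traceable reasoning log argued for above):

```python
# Minimal sketch of a post-hoc explanation: train an opaque model, then use
# permutation importance to report, in human terms, which inputs drove its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each input feature in turn and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: accuracy drop {importance:.3f}")
```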

I sense that what we need most is not a heavy-handed, government-led or industry lobby-led watchdog, but commonly agreed practices, principles, protocols and operating models that will help practitioners to deliver safe AI and beneficiaries to enjoy it. It may well be the case that once we develop a solid body of knowledge about best practices, protocols and models, we set up independent watchdogs, free from commercial influence, to make sure everyone is playing by the rules.

Artificial Intelligence

The transformative power of AI: handle with care

  • 12/03/2017 (updated 25/03/2017)

Artificial Intelligence (AI) has come of age. Since its very early days – from the Dartmouth Conference in the mid 50s and the early research labs in Europe and Japan – AI has grown considerably and these days enjoys widespread recognition and awe from practitioners and beneficiaries of AI technology. AI has also gone through periods of lesser excitement, with similar promises which unfortunately didn’t live up to their lofty expectations and resulted in “AI winter” periods. But today, 60-odd years since its foundation as a science and engineering discipline, and despite the current hype, the prospect of another AI winter is remote, and AI is poised to become the engineering foundation of 21st century technology systems: the kind of autonomous, intelligent, resilient and fault-tolerant systems that will serve our coming world of IoT, drones, synthetic-everything and self-driving vehicles.

Regardless of whether we get to that utopian world sooner rather than later, we need to focus more resources on work to ensure we fully grasp and understand each evolutionary step of AI before we embark on the next one. There is no doubt that, eventually, AI can deliver (fragments of) the automated world we used to read about and watch in sci-fi movies. But how we get there is crucial.

Commercial push

First, there is an unprecedented boom in commercial interest (AI is forecast to become a $50B industry by 2020) for AI applications. Large technology conglomerates drive much of that interest, as the stakes are higher than ever: mastering the current state of practice and owning the state of the art in AI gives you a unique competitive advantage over your peers. But the narrative ought to be more open and point not only to the great things that AI can do today, but also to all the things that it can’t do (at least, not yet):

  • Creative tasks (generic problem-solving, intuition, creativity, persuasion)
  • Adaptability (situational awareness, interpersonal interactions, emotional self-regulation, anticipation, inhibition)
  • Nearly instantaneous learning (AI systems need humongous data sets for training, whereas humans deploy common sense, general knowledge and associative cognition to grasp new concepts and adapt almost instantaneously to new situations)

State of practice

Second, there is the research, engineering and academic practice of AI today: much of the focus and many of the success stories of AI nowadays are in the machine learning (ML) sub-field of AI, with renewed interest in neural networks (NNs) and their recent incarnation, deep learning (DL). Distinctly different from GOFAI (good old-fashioned AI) practice, these engineering breakthroughs owe much of their ingenuity to a clever combination of mathematics, statistical sciences and attempts to simulate human neural networks. That connectionist approach to emulating intelligence works well in certain domains, but we need to consider the downsides when applying it at will:

  • Known unknowns: with the current practice of deep neural networks – some with hundreds of layers; the winning NN in the ImageNet image recognition competition, for example, had more than 160 layers – it is plausible to arrive at answers that appear correct yet whose origins we cannot comprehend or trace. This is a known problem, acknowledged even by pioneers in the field. It could lead to situations where it becomes increasingly difficult to figure out how an AI achieves its results: not being able to explain why the answer is the way it is, or simply to unpack the reasoning steps that were used to derive it. Progressively trying to understand interactions between NNs, down to the neuron level, is one way of uncovering the reasoning steps.
  • Right tool, wrong problem: today’s practice of NNs, and more specifically DL, and data science in general, uses a plethora of well-known algorithms, techniques, and engineering knowledge from statistics, maths and neuroscience – typically dozens of algorithms are available for any given problem. But choosing the right algorithm for the right problem is not an easy task. Even with the right apparatus in hand, we could still end up misinterpreting our assumptions about the real world, leading to selection bias.
  • Game for the rich? Undoubtedly, NNs have achieved incredible things over the past 4-5 years, from the Jeopardy! TV quiz show to AlphaGo conquering Go to AI playing Poker, and beyond. Regardless of the endless possibilities that these engineering feats open up, we should also note that the AI systems built to achieve all that used vast resources and effort: AlphaGo, for example, used 280 GPUs and 1,920 CPUs – a lot of hardware in absolute terms – and the teams at IBM and Google each deployed dozens of the world’s brightest engineers and millions of dollars of investment. Progressing AI to the next level will almost certainly mean making it affordable for everyone, and efficient down to the level of a Google search query. We are not anywhere near that yet, but initiatives for democratizing AI can certainly help move us in the right direction.
  • Give me (more) data: Current AI practice is data hungry. The more the better. And these data should, ideally, be clean of any noise that could result in contradictions at the training phase. For example, leading ML systems have used millions of labelled images to correctly identify the object in question; in contrast, a young child may need only one image. We need less data-hungry systems, and more intelligent, symbolic-based input for our state-of-the-art ML systems.

We also need to bring symbolic AI and connectionism together, as in AlphaGo’s use of tree search and other search algorithms in combination with its deep learning networks.
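As a toy sketch of what that combination looks like in code, consider a depth-limited game-tree search (the symbolic part) that, at its horizon, defers to an evaluation function standing in for a trained value network. The game (a simple take-1-to-3 counting game) and the hand-written evaluator are illustrative stand-ins; AlphaGo’s Monte Carlo tree search and deep networks are vastly more elaborate.

```python
# Symbolic search + learned evaluation, in miniature: negamax tree search that
# trusts a stand-in "value network" once it runs out of search depth.
# Game: players alternately take 1-3 items from a pile; whoever takes the last item wins.

def learned_value(pile: int) -> float:
    """Stand-in for a trained value network: how good is `pile` for the player to move?"""
    return 1.0 if pile % 4 != 0 else -1.0      # a real system would learn this from data

def search(pile: int, depth: int) -> float:
    """Negamax search; beyond `depth`, fall back on the learned evaluator."""
    if pile == 0:
        return -1.0                            # the opponent took the last item: we lost
    if depth == 0:
        return learned_value(pile)
    return max(-search(pile - take, depth - 1)
               for take in (1, 2, 3) if take <= pile)

def best_move(pile: int, depth: int = 4) -> int:
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: -search(pile - take, depth - 1))

print(best_move(10))                           # 2: leaves a pile of 8, a losing position for the opponent
```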

Impact

Third, now more than at any other point in AI’s history, we need to worry and care about governance and protocols for AI applications. Job automation, and the elimination of entire processes, tasks, and even industries, are possible with widespread use of AI. Regardless of the mitigating measures to turn this around and make AI an economic boon for everyone, we need to consider how we regulate AI applications and prepare for their (future) impact:

  • Augment, not replace jobs: Automation of manual, repetitive tasks is coming, and is coming fast. And AI is the main driver. But automating somebody’s job does not equate to automating the entire job function and replacing the job holder with an AI driven machine. What we can automate are the repetitive, manual, tedious tasks that occupy a significant portion of someone’s working day. In a recent survey of US occupations, these tasks were found to take up more than 50% of time spent at work.

All that freed-up time and capacity could then be used to do higher order cognitive tasks that we can’t automate and that need human judgement. Essentially, we aim to augment jobs, not eliminate them.

  • Self-regulate: Government-led guidance, industry best practice, legal mandates, and other stipulations could all advise on how to bring in and use AI automation, but self-regulation could also work: using workforce and organisational dynamics to inform the best strategies for automation – when, where and how. For example, Aviva, a large UK insurer, polled its employees on the impact of robots automating their jobs. This direct approach gives transparency to the process and explores execution paths in a coherent and open manner. Retraining possibilities for workers whose jobs could be automated include writing the scripts and business logic that robotic process automation (RPA) tools will use – thereby augmenting their jobs to a higher cognitive order, not replacing them, and having them work alongside their mechanical RPA peers.
  • Ethics: The widespread application areas for AI give us an interesting puzzle: who is going to govern AI applications, and in which areas? Say you’re concerned about critical national infrastructure, and AI technology inadvertently impacts a nation’s cyber-defences; you will naturally try to restrict its uses and reach. But what if the same technology is used to fight back against cyber threats, to identify or cure diseases, or to provide banking services to the billions of unbanked? The decision making of ethics committees should be elevated beyond the popular opinions of celebrated leaders, business gurus, and luminaries, and be based on industry-, sector- and application-specific frameworks, with input and regulatory oversight built in to ensure alignment with societal values. Dealing with future AI applications is even more challenging as more is yet to come: for example, carefully trained AI systems could accurately predict the outcome of human rights trials, paving the path for a future of AI and human judges working in sync, rather than AI replacing human judges. In these situations we should seek to develop AI systems that move away from the binary nature of their answers and accommodate other paradigms, such as fuzzy systems or good-enough answers in the context of open, semantic systems.

Clearly AI is going through a remarkable renaissance. Attention, funding, applications, practitioners, systems and engineering resources have never been so plentiful, and so concentrated, in its 60-odd-year history. But as AI becomes increasingly relevant to almost every industry and human activity, we also need to be aware of the need to regulate and channel its transformative power to truly enhance human lives and experiences.
