Dr Yannis Kalfoglou

AI watchdogs: do we need them?

  • 19/03/2017 (updated 25/03/2017)

The recent, remarkable advances of Artificial Intelligence (AI) have caused concern, even alarm, among some of the world’s best-known luminaries and entrepreneurs. We’ve seen calls in the popular press for watchdogs to keep an eye on uses of AI technology, for our own sanity and safety.

Is that a genuine call? Do we really need AI watchdogs? Let me unpack some of the reasoning behind this and put the topic into perspective: why AI might need a watchdog; who would operate these watchdogs, and how; and whether any of it would make a substantial difference or silence the critics.

Watchdogs & Standards

But first, a dictionary-style definition of a watchdog: “a person or organization responsible for making certain that companies obey particular standards and do not act illegally”. The key point to bear in mind is that a watchdog is not just a monitoring and reporting function of some sort; it should have the authority and means to ensure standards are adhered to and that companies developing AI do so legally. I think that is quite tricky to establish now, given the extremely versatile nature of AI and its applications. To understand the enormity of the task, let’s look at a similar, if not overlapping, area: that of software engineering standards and practices.

Software engineering is a very mature profession, with decades of practice, lessons learnt, and fine-tuning of the art of writing elegant software that is reliable and safe. For example, international standards are finally available and incorporate a huge body of knowledge for software engineering (SWEBOK), which describes “generally accepted knowledge about software engineering”; it covers a variety of knowledge areas and has been developed collaboratively, with input from many practitioners and organisations from over 30 countries. Other efforts to educate and promote ethics in the practice of writing correct software emphasize the role of agreed principles which “should influence software engineers to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments concern for the health, safety and welfare of the public is primary”. But these efforts did not appear overnight. Software engineering standards, principles and common practice took decades of development, and of trial and error, to arrive at the relatively condensed set of standards we have today, down from the 300-plus standards from 50 different organisations we had some 20 years ago.

But even with carefully designed standards and decades of accepted common practice in software engineering, it seems we cannot eliminate the uncomfortable occurrence of the infamous (software) bugs. As everyone who is remotely interested in the safety of software-based systems will know, getting it right is not easy: over the years we have had numerous software disasters, including ones that caused fatalities, loss of property and value, widespread disruption, and so on. And all of that was due to software bugs that somehow crept through to the final production code. In most cases the standards were followed, and to a large extent the software system was deemed to be okay. But the point is not to discredit the usefulness of standards: things would arguably have been a lot worse without standards to keep them in check and to make sure that software was produced in an acceptable manner and would behave as expected, especially in safety-critical systems (from nuclear reactors to autopilots). The point to keep in mind, as we consider following this tried and tested approach for AI systems, is that having standards will not prevent the embarrassing, and sometimes fatal, disasters we aim to avoid.

AI is different

AI also brings to the table a lot of unknowns, which make it difficult even to start thinking about establishing a standard in the first place: as some of the most experienced folks in this space point out, AI verification and validation is not easy. We face the brittleness of AI systems; dependencies on data and configurations that constantly change as the system improves on past states (a key advantage of machine learning is precisely that it constantly learns and improves); AI systems that are non-modular, where changing anything could change everything in the system; known issues with privacy and security; and so on.

But the one thing that appears to be the crux of the problem, and concerns a lot of people, is the interpretability of AI systems’ outcomes. For example, the well-known, industry-backed Partnership on AI clearly states as one of its key tenets:

“We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.”

And they are not the only ones; the UK government’s chief scientist has made similar calls: “Current approaches to liability and negligence are largely untested in this area. Asking, for example, whether an algorithm acted in the same way as a reasonable human professional would have done in the same circumstance assumes that this is an appropriate comparison to make. The algorithm may have been modelled on existing professional practice, so might meet this test by default. In some cases it may not be possible to tell that something has gone wrong, making it difficult for organisations to demonstrate they are not acting negligently or for individuals to seek redress. As the courts build experience in addressing these questions, a body of case law will develop.”

Building that case-law knowledge will take time, as AI systems and their use mature and evidence from the field feeds in new data that helps us better understand how to regulate AI systems. Current practice highlights that this is not easy. For example, the well-publicized, and unfortunately fatal, crashes of a famous AV/EV car manufacturer puzzle a lot of practitioners and law enforcement agencies. The interim report on one of the fatal crashes points to human driver error – the driver did not react in time to prevent the fatal crash – but the role and functionality of the autopilot feature is at the core of this saga: did it malfunction by failing to correctly identify the object that obstructed the vehicle’s route? It appears that the truck was cutting across the car’s path instead of driving directly in front of it, which the radar is better at detecting, and the camera-based system wasn’t trained to recognize the flat slab of a truck’s side as a threat. But even if it did malfunction, does it really matter?

The US transportation secretary voiced a warning that “drivers have a duty to take seriously their obligation to maintain control of a vehicle”. The problem appears to lie in the public perception of what an autonomous vehicle can do. Marketing and commercial interests seem to have pushed out the message of what can be done, rather than what can’t be done, with a semi-autonomous vehicle. But this is changing now, in the light of the recent crashes; and, interestingly enough, if it was AI (by way of machine vision) and conventional technology (cameras and radar) that let us down, the manufacturer is bringing in more AI to alleviate the situation and deliver a more robust and safe semi-autonomous driving experience.

Where do we go from here

So, any rushed initiative to put out some sort of AI watchdog in response to our fears and misconceptions about the technology will most likely fail to deliver the anticipated results. We would do better to spend more time and effort making sure we push out the right message about AI: what it can do and, most importantly, what it can’t do. AI systems and current practice also need to mature and reach a state where we have, at the very least, a traceable reasoning log, so that a system can adequately, and in human terms, explain and demonstrate how it reached its decisions. Educating the public and law and policy stakeholders is critical too. As the early encounters of law enforcement agencies with semi-autonomous AV/EV vehicles show, there is little understanding of what the technology really does, and of where you draw the line in a mixed human-machine environment: how do you identify the culprit in a symbiotic environment where it is hard to tell who’s in control – human or machine? Or both?
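To make the idea of a traceable reasoning log concrete, here is a minimal Python sketch (all names are hypothetical, and this is an illustration of the principle rather than anyone’s actual implementation): every step a system takes is recorded together with the evidence that triggered it, so the decision can later be replayed and explained in human terms.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    """One auditable step in the system's decision process."""
    rule: str        # which rule or model component fired
    evidence: dict   # the inputs that triggered it
    conclusion: str  # what the system inferred

@dataclass
class ReasoningLog:
    """Append-only trace that can be replayed in human terms."""
    steps: list = field(default_factory=list)

    def record(self, rule, evidence, conclusion):
        self.steps.append(ReasoningStep(rule, evidence, conclusion))

    def explain(self):
        # Render the trace as a human-readable narrative.
        return "\n".join(
            f"{i + 1}. {s.rule}: given {s.evidence}, concluded '{s.conclusion}'"
            for i, s in enumerate(self.steps)
        )

# Hypothetical usage in a loan-decision system:
log = ReasoningLog()
log.record("income_check", {"income": 42000, "threshold": 30000}, "income sufficient")
log.record("credit_history", {"missed_payments": 3}, "elevated risk")
log.record("final_decision", {"risk": "elevated"}, "refer to a human underwriter")
print(log.explain())
```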

I sense that what we need is not so much a heavy-handed, government-led or industry-lobby-led watchdog as commonly agreed practices, principles, protocols and operating models that will help practitioners deliver safe AI and beneficiaries enjoy it. It may well be the case that, once we develop a solid body of knowledge about best practices, protocols and models, we set up independent watchdogs, free from commercial influence, to make sure everyone is playing by the rules.


The transformative power of AI: handle with care

  • 12/03/2017 (updated 25/03/2017)

Artificial Intelligence (AI) has come of age. Since its very early days – from the Dartmouth Conference in the mid-50s and the early research labs in Europe and Japan – AI has grown considerably, and these days it enjoys widespread recognition and awe from practitioners and beneficiaries of AI technology. AI has also gone through periods of lesser excitement, with similar promises which unfortunately didn’t live up to their lofty expectations and resulted in “AI winter” periods. But today, 60-odd years since its foundation as a science and engineering discipline, and despite the current hype, the prospect of another AI winter is remote; AI is poised to become the engineering foundation of 21st-century technology systems – the kind of autonomous, intelligent, resilient and fault-tolerant systems that will serve our coming world of IoT, drones, synthetic-everything and self-driving vehicles.

Regardless of whether we get to that utopian world sooner or later, we need to focus more resources on ensuring we fully grasp and understand each evolutionary step of AI before we embark on the next one. There is no doubt that, eventually, AI can deliver (fragments of) the automated world we used to read about and watch in sci-fi movies. But how we get there is crucial.

Commercial push

First, there is an unprecedented boom in commercial interest in AI applications (AI is forecast to become a $50B industry by 2020). Large technology conglomerates drive much of that interest, as the stakes are higher than ever: mastering the current state of practice and owning the state of the art in AI gives you a unique competitive advantage over your peers. But the narrative ought to be more open and point not only to the great things that AI can do today, but also to all the things that it can’t do (at least not yet):

  • Creative tasks (generic problem-solving, intuition, creativity, persuasion)
  • Adaptability (situational awareness, interpersonal interactions, emotional self-regulation, anticipation, inhibition)
  • Nearly instantaneous learning (AI systems need humongous data sets for training; humans, on the contrary, deploy common sense, general knowledge and associative cognition techniques to grasp new concepts and adapt almost instantaneously to new situations)

State of practice

Second, there is the research, engineering and academic practice of AI today: much of the focus, and many of the success stories, of AI nowadays are in the machine learning (ML) sub-field of AI, with renewed interest in neural networks (NNs) and their recent incarnation, deep learning (DL). Distinctly different from the GOFAI (good old-fashioned AI) practice, these engineering breakthroughs owe much of their ingenuity to a clever combination of mathematics, statistical sciences and attempts to simulate human neural networks. That connectionist approach to emulating intelligence works well in certain domains, but we need to consider the downsides when applying it at will:

  • Known unknowns: with the current practice of deep neural networks – those with hundreds of layers; the winning NN in the ImageNet image recognition competition, for example, had more than 160 layers – it is plausible to arrive at answers that appear correct yet whose origins we have no way to comprehend or trace. This is a known problem, acknowledged even by pioneers in the field. It could lead to situations where it becomes increasingly difficult to figure out how an AI achieves its results: not being able to explain why the answer is the way it is, or simply to unpack the reasoning steps that were used to derive it. Progressively trying to understand interactions within NNs, down to the neuron level, is one way of uncovering the reasoning steps (see the sketch after this list).
  • Right tool, wrong problem: today’s practice of NNs, and more specifically DL, and data science in general, uses a plethora of well-known algorithms, techniques and engineering knowledge from statistics, maths and neuroscience – typically dozens of algorithms are available for any given problem. But choosing the right algorithm for the right problem is not an easy task. Even with the right apparatus in hand, we could still end up misinterpreting our assumptions about the real world and fall into selection bias.
  • Game for the rich? Undoubtedly, NNs have achieved incredible things over the past 4-5 years, from the Jeopardy! TV quiz show to AlphaGo conquering Go to playing poker, and beyond. Regardless of the endless possibilities these engineering feats open up, we should also note that the AI systems built to achieve all that used vast resources and effort: AlphaGo, for example, used 280 GPUs and 1,920 CPUs, which is a lot of hardware in absolute terms, and the teams at IBM and Google each employed dozens of the world’s brightest engineers and millions of dollars of investment. Progressing AI to the next level will almost certainly mean making it affordable for everyone, and efficient down to the level of a Google search query. We are not anywhere near that yet, but initiatives for democratizing AI can certainly help move us in the right direction.
  • Give me (more) data: current AI practice is data hungry. The more the better. And these data should ideally be clean of any noise that could introduce contradictions at the training phase. For example, leading ML systems have used millions of labelled images to correctly identify the object in question; in contrast, a young child can manage with only one image. We need less data-hungry, and more intelligent, symbolic-based input for our state-of-the-art ML systems.

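To ground the neuron-level inspection mentioned in the first bullet, here is a minimal numpy sketch of the idea (a toy randomly-weighted network stands in for a trained model, and real interpretability work is far more involved): the forward pass is instrumented so that every layer’s activations are kept and can be examined, neuron by neuron, after the answer is produced.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-layer network with random weights, standing in for a trained model.
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 2))]

def forward_with_trace(x):
    """Run a forward pass, keeping every layer's activations so they can
    be inspected neuron by neuron after the answer is produced."""
    trace = [x]
    for w in layers:
        x = np.tanh(x @ w)  # tanh as the activation function
        trace.append(x)
    return x, trace

output, trace = forward_with_trace(rng.standard_normal(4))
for depth, activations in enumerate(trace):
    print(f"layer {depth}: {np.round(activations, 2)}")
```
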
We also need to bring symbolic AI and connectionism to work together, as in AlphaGo’s use of tree search and other search algorithms in combination with its deep learning networks.

Impact

Third, nowadays more than at any other point in AI’s history, we need to worry and care about governance and protocols for AI applications. Automation of jobs, elimination of entire processes and tasks, and even of whole industries, is possible with widespread use of AI. Regardless of the mitigating measures to turn this around and make AI an economic boon for everyone, we need to consider how we regulate AI applications and prepare for their (future) impact:

  • Augment, not replace jobs: automation of manual, repetitive tasks is coming, and it is coming fast. And AI is the main culprit. But automating somebody’s job does not equate to automating the entire job function and replacing the job holder with an AI-driven machine. What we could automate are the repetitive, manual, tedious tasks that occupy a significant portion of someone’s daily job; in a recent survey of US occupations, these tasks were found to take up more than 50% of time spent at work.


All that freed-up capacity could then be used for higher-order cognitive tasks that we can’t automate and that need human judgement. Essentially, we aim to augment jobs, not eliminate them.

  • Self-regulate: government-led stipulations, industry best practice and legal mandates could provide guidance on how to bring in and use AI automation, but self-regulation could also work: using workforce and organisational dynamics to inform the best strategies for automation – when, where and how. For example, Aviva, a large UK insurer, polled its employees on the impact of robots automating their jobs. This direct approach gives transparency to the process and explores execution paths in a coherent and open manner. Retraining possibilities for workers whose jobs could be automated include writing the scripts and business logic that robotic process automation (RPA) tools will use – thereby augmenting their jobs to a higher cognitive order, not replacing them, but having them work alongside their RPA mechanical peers.
  • Ethics: the widespread application areas for AI give us an interesting puzzle: who is going to govern AI applications, and in what areas? Say you’re concerned about critical national infrastructure and AI technology that inadvertently impacts a nation’s cyber-defences; you would naturally try to restrict its uses and reach. But what if the same technology is used to fight back against cyber threats, to identify or cure diseases, or to provide banking services to the billions of unbanked? The decision-making of ethics committees should be elevated beyond the popular opinions of celebrated leaders, business gurus and luminaries, and based on industry-, sector- and application-specific frameworks, with input and regulatory oversight built in to ensure alignment with societal values. Dealing with future AI applications is even more challenging, as more is yet to come: for example, carefully trained AI systems can already accurately predict the outcomes of human rights trials, paving the path for a future of AI and human judges working in sync, rather than AI replacing human judges. In these situations we should seek to develop AI systems that move away from the binary nature of their answers and accommodate other paradigms, such as fuzzy systems or good-enough answers in the context of open, semantic systems.

Clearly AI is going through a remarkable renaissance. Attention, funding, applications, practitioners, systems and engineering resources have never been so plentiful, or so concentrated, in its 60-odd-year history. But as AI becomes increasingly relevant to almost every industry and human activity, we also need to be aware of the need to regulate and channel its transformative power so that it truly enhances human lives and experiences.


6 vulnerabilities of blockchain technology and how we will…

  • 05/03/2017 (updated 25/03/2017)

“The future belongs to those who see possibilities before they become obvious” – T. Levitt

Blockchain technology is an exciting and fascinating topic. In its short 8 years of existence it has attracted massive attention from the world’s brainiest folks and billions of dollars in funding, spurred thousands of new companies (startups mostly), promised to turn upside down almost every industry as we know it today, and offered to provide the foundations for the next internet: the internet of value. But is this grand vision feasible, and on track?

That is a rather rhetorical question, as it tackles a utopian scenario, so I’d rather focus on practical issues as I have identified them from my experience and exposure to blockchain: 6 key, broadly defined vulnerabilities where more concentrated work is needed to bring us closer to the utopia described above. For the sake of brevity, I will use blockchain as a one-size-fits-all term, inclusive of public blockchains (bitcoin, ethereum, etc.), privately deployed (permissioned) ones, distributed ledgers (non-blockchain based) and shared ledgers.

Identifying vulnerabilities early in the course of a transformational technology’s journey is not necessarily a contrarian move. Rather, it helps to constructively identify areas that are vulnerable and helps practitioners focus on ways to overcome subtle problems (as with, for example, the vulnerabilities I identified for the visionary semantic web technology). That way, you acknowledge weaknesses early enough to apply remedies before getting stuck in fruitless explorations and wasting resources and time. Technology companies have a way of tackling such situations with pivots, when they can.

Blockchain technology enjoyed a unique situational opportunity: it emerged in the aftermath of the 2008 global financial crisis (GFC) and presented itself with huge transformational potential. Unique triggers that aligned powerful forces in the global financial and geo-political ecosystem influenced and enhanced blockchain’s rapid trajectory: the 2008 GFC brought about a nearly catastrophic erosion of trust in banking institutions among large portions of the public; financial services institutions embarked, forced or willingly, on digital transformation journeys; the sharpest-minded folks of the despondent public sought ways to take full control of their financial affairs away from the banks; and great technological innovations helped, with seemingly unlimited computational power and planetary-scale distributed systems. All that combined led to the birth of bitcoin in 2009. And then blockchain emerged. And the rest is history. Nowadays, blockchain applications and pilots in financial services are omnipresent, funding is in abundance, and it seems that large corporates and VCs can’t get enough of the pie, and continuously invest.

But what should we expect, ideally, from such a unique confluence of events and resources around blockchain technology? T+0 trade processing; crypto-currencies in the (digital) pocket of every individual; crypto-assets available 24/7/365 for immediate trading of any asset and transfer of value globally, from anyone to anyone, using sophisticated smart contracts; elimination of duplicated record keeping; on-demand automation; intelligent autonomous oracles regulating the flow of value through the execution of smart contracts; and eventually the realization of DAOs (neatly engineered and governed, to avoid the well-known issues with first attempts in this space). All of that would ideally bring benefits to society as a whole and tackle the huge issue of the more than 3 billion folks out there who don’t have access to banking.

Vulnerabilities

To get to that mind-boggling future, we need to do more work on:

1. (Universal) adoption 

Blockchain is great if every market participant is using it. You need two hands to clap. It’s a network, and membership should be seamless and streamlined; the bigger the network, the greater its effect and benefits for its participants. In the capital markets, sell-side and buy-side institutions, intermediaries, exchanges, agents (retail and institutional) and ultimately regulators all need to be in it, one way or another. Blockchain has the transformational effect of “math-based money”, where the notion of a trusted third party is not present or needed (at least not in today’s form), but the current complex financial markets ecosystem won’t change overnight. Practically it can’t, and tactically it shouldn’t.
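The “bigger the network, the greater its effect” claim can be made concrete with a rough back-of-the-envelope model. Assuming a Metcalfe-style view of network value (an assumption for illustration, not settled blockchain economics), value grows with the number of possible links between participants:

```python
def potential_connections(n):
    # Metcalfe-style proxy for network value: pairwise links among n participants.
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} participants -> {potential_connections(n)} possible links")
# 10x the participants yields roughly 100x the possible links, which is
# why partial adoption blunts the benefit for every participant.
```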

How to remedy: as every journey starts with a first small step, blockchain adoption has already started in small, concentrated pockets of the ecosystem where existing functionality can change to peer-to-peer (p2p) with relative ease. The emergence of private blockchain systems helps counter initial roadblocks stemming from perceptions of public blockchains’ reliability; consortia bringing together many and diverse market participants have emerged and proliferate, which helps in understanding dynamics, vested interests and shared goals; and these consortia could also lead to much-needed standards in this space (so that everybody adheres to a standard).

2. Standards 

Capital markets participants will need to agree on standardisation across the digital representations of various asset classes on blockchain(s) and their underlying securities. Failing to do so in a cost-effective and timely manner, with agreed-upon practices on how to conduct business, could undermine the benefits of a blockchain in the first place.

How to remedy: consortia are one way to foster collaboration and work on blueprints for blockchain(s) standards. Another is to get sector regulators to weigh in and enforce principled ways of conducting business with blockchain(s). Standards take time to be agreed upon, to mature and to be adhered to, but the first steps toward standardisation of the conduct of business using blockchain look promising.

3. Scalability

Current financial markets enjoy some of the most complex and scalable operational technology of any industry. For example, in the transactions space, the VISA network averages 2,000 transactions per second (tps), and at peak times that figure goes up to approximately 50,000 tps. By contrast, the largest public blockchain out there, bitcoin, averages 7 tps. In addition, block size limits present some interesting challenges for system architects, as not much data can squeeze into the few megabytes available (ranging from the standard 1MB up to 8MB with some clever engineering). Also, the time taken for processing and validating transactions (mining) affects the throughput rate and, importantly, the ordering of transactions (which could lead to undesirable double-spending effects, Sybil attacks, etc.).
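The oft-quoted ~7 tps figure for bitcoin follows directly from the protocol’s parameters; a quick back-of-the-envelope calculation (assuming an average transaction size of roughly 250 bytes, a commonly used estimate) shows why:

```python
block_size_bytes = 1_000_000   # 1 MB block size limit
avg_tx_bytes = 250             # rough average transaction size (an assumption)
block_interval_seconds = 600   # one block roughly every 10 minutes

tx_per_block = block_size_bytes // avg_tx_bytes
tps = tx_per_block / block_interval_seconds
print(f"~{tx_per_block} transactions per block, ~{tps:.1f} tps")
# ~4000 transactions per block, ~6.7 tps: in line with the ~7 tps average above
```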

How to remedy: if it is only a matter of firepower, then we see that continuous work and relentless innovation have already produced some dividends: Symbiont reported 87,000 tps in certain transactional circumstances, and there is ongoing work on scalable block size limits with the emerging Lightning Network. But viewing the scalability issue of blockchain(s) purely from a firepower angle is misleading; new architectures and clever ways of combining the best bits of rudimentary blockchain(s) are another way to achieve scalable, enterprise-grade blockchain(s).

4. Regulations and governance 

Blockchain(s) can be a boon for regulators (they could potentially help prevent another 2008-style GFC, as stated in a White House hearing last year), but also a challenge. As blockchain(s) present, at least in their native form, a self-regulated network of transactions from a participant’s perspective, regulators and policy makers need to weigh in to protect the consumer from improper activity.

How to remedy: global regulators and central banks have begun to notice blockchain over the past couple of years: from the high-profile ones to local state regulators, from the West to the East, regulators all over the world are gearing up for the blockchain era. For example, in just a few days the SEC will deliver its verdict on the Winklevoss brothers’ bitcoin ETF application; if regulatory approval is granted, it could open the floodgates for retail investors and aficionados of cryptocurrencies, since as much as $300M could be invested in that ETF alone in its first week. Regulators should also take into consideration the dynamics of regulation in blockchain environments: enforcing versus making the rules. Enforcing the rules in such an environment is mechanical, given by the very nature of blockchains; but making the rules defies the logic of using a blockchain in the first place! This is the “governance paradox” of blockchain.

5. Anonymity and off chain world

One of the cornerstones of capital markets trading strategies and dynamics is that buyers and sellers do not always have to reveal themselves to each other or make their commercial intentions known prior to a trade. With blockchain(s) technology – a self-regulated, open network with an underlying immutability property for transactions – this is not easy to achieve. On the other hand, regulators and policy makers need to ensure that access and traceability are, technically and rightfully, features they can rely upon to do their job.

How to remedy: the anonymity of blockchain(s), mostly public ones with a proof-of-work consensus mechanism, can be tackled by adopting private, permissioned blockchain(s), or even a different consensus mechanism (see, for example, the excellent review by G. Samman of consensus mechanisms) where the validators are known and trusted, resembling today’s model of market intermediaries (ACHs, CSDs, etc.). Regarding the immutability property of blockchain(s), not all data needs to live on the chain – there are circumstances where data and triggers can, and should, be off the chain: in a parametric micro-insurance context, for example, oracle data feeds smart contracts to enable trigger-based execution when a condition is met. But, as smart contracts will be executed independently by every node on the chain, we need to guarantee that every smart contract receives exactly the same information from the oracles, so that we have deterministic computation, with no inconsistencies due to lapses in network uptime or temporary unavailability of oracle data input. A workaround is to have oracles push data onto the chain – rather than smart contracts pulling it in – so that we guarantee every node sees the same data. Hence, oracles (and other data and business-logic structures) can live off the chain and feed the on-chain components on demand, or as per protocol design.
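As an illustration of the push-based workaround, here is a minimal Python simulation (all names and the rainfall scenario are hypothetical): the oracle pushes a single, integrity-hashed record onto the chain, and every node’s copy of the smart contract evaluates that identical input, guaranteeing a deterministic outcome.

```python
import hashlib
import json

def oracle_reading():
    """One oracle reading, hashed for integrity, pushed on-chain (hypothetical
    parametric micro-insurance trigger: rainfall at a weather station)."""
    payload = {"station": "LHR", "rainfall_mm": 62, "period": "2017-03"}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"payload": payload, "digest": digest}

def smart_contract(record, threshold_mm=50):
    """Deterministic payout rule that every node executes on the same pushed data."""
    return "pay claim" if record["payload"]["rainfall_mm"] > threshold_mm else "no payout"

# The oracle pushes once; every node consumes the identical on-chain record,
# so there is no per-node pull that could return different data.
on_chain_record = oracle_reading()
results = {node: smart_contract(on_chain_record) for node in ("node-A", "node-B", "node-C")}
print(results)
assert len(set(results.values())) == 1  # all nodes reach the same decision
```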

6. Switching cost 

Adopting blockchain(s) is not a weekend system upgrade before markets open again on a Monday. Even the simplest POCs and pilots take weeks to come to fruition (excluding the “quick and dirty” hacks done in a few hours, which are not intended to produce enterprise-ready production systems), let alone intentional systems replacements, which can take years to materialize. There are plenty of reports and market research out there pointing to billions of dollars in cost savings as a benefit of adopting blockchain(s). But there is little evidence and reporting on what the switching cost is, or even how to quantify it.

How to remedy: practically speaking, we need to wait a few more years until we complete cycles of switching large, core financial market infrastructure systems to blockchain-based ones. Early work points to clever ways of preparing such a switch (for example, BNYM’s BDS 360 system), but we also need to understand what works and what doesn’t (see, for example, Deutsche Bank’s commentary on bitcoin failing to eliminate intermediaries). General guidance and shared experiences also help, and we need more of that.

The road to adolescence

Arguably, these are not insurmountable obstacles; with all the attention and resources devoted to blockchain technology development, it is only a matter of a few years before we have solid blockchain systems underpinning the global financial services ecosystem. But in other industries blockchain has, arguably, more immediate impact potential, with things like supply chain management, resilient ID management, records management (healthcare, personal, government, real estate, etc.), insurance (parametric, contextualized, micro, p2p) and other initial explorations. As blockchain applications progressively mature in other industries, the great catalyst that will bring about fundamental change will arrive: the consumer. A great catalyst by their sheer volume: when consumers start using blockchain(s) and services built on them, we will experience an impact equivalent, at least, to the arrival of e-commerce on the web and the growth of social media.

But the current focus should also shift from enhancing the existing mechanics of blockchain(s) – higher throughput in tps, bigger block sizes, better consensus mechanisms, flexible governance protocols, etc. – to bringing in new techniques and knowledge from other fields and practices. For example, current work on smart contracts could benefit from the solid body of knowledge and work done in AI on autonomous multi-agent systems (think of agents as smart contracts on blockchains), especially in automated coalition formation and mechanised trust protocols. Industry practitioners could benefit from engagement with academia, as we clearly witness, for example, in the partnership of Barclays and Imperial College London on smart contracts research. One would observe that, unlike other transformational technologies, blockchain had little to do with academic research and development, initially at least. An almost “not invented here” syndrome kept the best academic labs muted in the first few years, whilst much of the work was driven by communities of practitioners. But this has started to change lately, with world-class research hubs forging partnerships and setting up blockchain-specific labs.

Transformational technologies take time to pay back heavy dividends. It’s a long and impactful journey, not a quick sprint with a shocking effect. The pace of substituting old technology with new depends greatly on ecosystems, and we see that blockchain is doing well on that front. The most impactful innovations arrive at the end of the hype cycle and tend to stay with us for a long time. Blockchain technology is getting there, and it will keep us busy on its journey to adolescence.
