Artificial Intelligence: What is It and Why Now?

Francesco Corea / 11 min read.
November 21, 2016

Artificial Intelligence (AI) represents a paradigm shift that is driving both scientific progress and the evolution of industry. Given the intense level of domain knowledge required to really appreciate the technicalities of artificial engines, what AI is and can do is often misunderstood: the general audience is fascinated by its development and frightened by terminator-like scenarios; investors are mobilizing huge amounts of capital, but they do not have a clear picture of the competitive drivers that characterize companies and products; and managers are rushing to get their hands on the latest software that may improve their productivity and revenues, and eventually their bonuses.

Even though the general optimism around advancements in artificial intelligence is evident (Müller and Bostrom, 2016), I believe it is necessary to clarify some concepts in order to foster the pace of growth facilitated by AI.

Basic Definitions & Categorization

First, let's describe what artificial intelligence means. According to Bostrom (2014), AI today is perceived in three different ways: it is something that might answer all your questions with an increasing degree of accuracy (the Oracle); it could do anything it is commanded to do (the Genie); or it might act autonomously to pursue a certain long-term goal (the Sovereign). However, AI should not be defined by what it can or cannot do, and thus a broader definition is appropriate.

An artificial intelligence is a system that can learn how to learn: in other words, a series of instructions (an algorithm) that allows computers to write their own algorithms without being explicitly programmed to do so.
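
To make this definition concrete, here is a minimal sketch of my own (not part of the original article) of a program that derives its own rule from data rather than having the rule hand-coded; the hidden coefficients, learning rate, and number of iterations below are arbitrary illustrative choices.

```python
# A minimal sketch of "learning from data instead of being explicitly programmed".
import numpy as np

rng = np.random.default_rng(0)

# Data generated by an unknown relationship y = 3x + 2 plus noise;
# the program is never told these coefficients.
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 2 + rng.normal(scale=0.1, size=200)

# The "algorithm that writes its own algorithm": gradient descent adjusts
# the parameters (w, b) so as to reduce the prediction error on the data.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(1000):
    pred = w * x + b
    err = pred - y
    w -= lr * np.mean(err * x)   # move w against the error gradient
    b -= lr * np.mean(err)       # move b against the error gradient

print(f"learned rule: y ~= {w:.2f} * x + {b:.2f}")  # close to the hidden 3x + 2
```

The program is never shown the relationship y = 3x + 2; it recovers an approximation of it purely by reducing its prediction error on the observed data.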

Although we usually think of intelligence as the computational part of our ability to achieve certain goals, it is rather the capacity to learn and solve new problems in a changing environment. In a primordial world, then, it is simply the aptitude that fosters survival and reproduction (Lo, 2012; 2013; Brennan and Lo, 2011; 2012). A living being is then defined as intelligent if it drives the world into states it is optimizing for.

No matter how accurately we define this concept, we can intuitively understand that the level of intelligence machines are endowed with today is far from the average level of any human being. While human actions proceed from observing the physical world and deriving the underlying relationships that link cause and effect in natural phenomena, an artificial intelligence is driven entirely by data and has no prior knowledge of the nature of the relationships among those data. It is artificial in this sense because it does not stem from physical laws, but rather from pure data.

We have now defined what artificial intelligence is and what it means to us. In addition, there are two other concepts that should be treated as part of this introduction to AI: first, how AI differs from and relates to other buzzwords (big data, machine learning, etc.); second, what features a system has to possess to be defined as intelligent.

We think of AI as an interdisciplinary field, which covers (and requires) the study of manifold sub-disciplines, such as natural language processing, computer vision, as well as the Internet of Things and robotics. Hence, in this respect, AI is an umbrella term that gathers a bucket of different aspects. We can somehow think of AI as similar to a fully-functional living being, and we can draw comparisons to figure out the degree of relationship between AI and other (sub)fields. If AI and the human body are alike, it has to possess a brain, which carries out a variety of tasks and is in charge of specific functions such as language (NLP), sight (computer vision), and so on. The body is made of bones and muscles, much as a robot is made of circuits and metal. Machine learning can be seen as the specific movements, actions, or thoughts we develop and fine-tune by doing. The Internet of Things (IoT) corresponds to the human senses, which are the way in which we perceive the world around us. Finally, big data is the equivalent of the food we eat and the air we breathe, i.e., the fuel that makes us tick, as well as every input we receive from the external world that is captured by our senses. It is a rather rough comparison, but it conveys a simple picture of how all these terms relate to each other.

Although many other comparisons may be drawn, and many of them can be correct simultaneously, the choice of which features a system should have to count as a proper AI is still quite controversial. In my opinion, the system should be endowed with a learning structure, an interactive communication interface, and a sensorial-like input digestion. Unfortunately, this idea is not rigorous from a scientific point of view, because it involves a series of ethical, psychological, and philosophical considerations that would need to be taken into account.

3 Types of AI

Instead of dwelling on this non-provable concept, I prefer to illustrate how those characteristics are reflected in the different types of AI we are (and will be) dealing with.

An AI can indeed be classified into three types. The first is narrow AI, which is nothing more than a specific domain application or task that gets better by ingesting further data and learning how to reduce the output error. An example is DeepBlue for chess, but more generally this group includes all the functional technologies that serve a specific purpose. These systems are usually quite controllable because they are limited to specific tasks.

When a program is instead not built for a specific task, but can learn from one application and apply the same bucket of knowledge to different environments, we face an Artificial General Intelligence (AGI). This is not technology-as-a-service as in the narrow case, but rather technology-as-a-product.

The best example for this subgroup is Google DeepMind, although it is not a real AGI in all respects. We are indeed not there yet, because even DeepMind cannot perform an intellectual task the way a human would. In order to get there, much more progress has to be made on the functioning of brain structures, the optimization of brain processes, and the development of portable computing power.

Someone might think that an AGI can easily be achieved by piling up many narrow AIs, but in fact this is not true: it is not a matter of the number of specific skills a program can carry out, but rather of the integration between all those abilities. This type of intelligence does not require an expert to work or to be tuned, as would be the case for narrow AI, but it has a huge limitation: at the current state, it can be reached only by continuously streaming an infinite flow of data into the engine.

The final stage is instead called Artificial Superintelligence (ASI): this intelligence largely exceeds the human one; it is capable of scientific and creative thinking; it is characterized by general common wisdom; and it has social skills and perhaps emotional intelligence. Although we often assume this intelligence will be a single supercomputer, it is more likely that it will be made up of a network or a swarm of several intelligences.

The way in which we will reach these different stages is still controversial, though, and many schools of thought exist. The symbolic approach claims that all knowledge is symbolic and that the representation space is limited, so everything should be stated in a formal mathematical language. This approach has historically tried to analyze the complexity of the real world, and it has suffered at the same time from computational problems as well as from the problem of understanding where knowledge itself originates. Statistical AI instead focuses on managing the uncertainty of the real world (Domingos et al., 2006), living in the realm of inference, in contrast to the more deductive logical AI. As an aside, it is not yet clear to what degree the human brain should be taken as an example: biological neural networks seem to provide a great infrastructure for developing an AI, especially regarding the use of sparse distributed representations (SDRs) to process information.
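
As a purely illustrative contrast between the two schools (a sketch of my own, not from the article), the toy program below solves the same task first with a hand-written symbolic rule and then with a small statistical model that infers a probabilistic rule from labelled examples; the example messages and word lists are invented.

```python
# Symbolic vs. statistical AI on the same toy task: deciding whether a message is spam.
from collections import Counter

# Symbolic approach: knowledge is encoded as an explicit, hand-written logical rule.
def symbolic_is_spam(words):
    return "winner" in words and "free" in words  # rigid rule, no uncertainty

# Statistical approach: the rule is inferred from labelled examples and expressed
# as (smoothed) word frequencies, so it can handle uncertainty and new inputs.
training = [
    (["free", "winner", "claim"], True),
    (["free", "offer"], True),
    (["meeting", "tomorrow"], False),
    (["project", "free", "slot"], False),
]

spam_counts, ham_counts = Counter(), Counter()
n_spam = sum(1 for _, label in training if label)
n_ham = len(training) - n_spam
for words, label in training:
    (spam_counts if label else ham_counts).update(words)

def statistical_is_spam(words):
    # Naive-Bayes-style score with add-one smoothing: prior times word likelihoods.
    p_spam, p_ham = n_spam, n_ham
    for w in words:
        p_spam *= (spam_counts[w] + 1) / (sum(spam_counts.values()) + 1)
        p_ham *= (ham_counts[w] + 1) / (sum(ham_counts.values()) + 1)
    return p_spam > p_ham

print(symbolic_is_spam(["free", "offer"]))     # False: the rigid rule misses it
print(statistical_is_spam(["free", "offer"]))  # True: the learned rule catches it
```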

A Bit of AI History

In spite of all the current hype, AI is not a new field of study; it has its roots in the fifties. If we exclude the purely philosophical line of reasoning that runs from the Ancient Greeks to Hobbes, Leibniz, and Pascal, AI as we know it was officially founded in 1956 at Dartmouth College, where the most eminent experts gathered to brainstorm on the simulation of intelligence.

This happened only a few years after Asimov set down his three laws of robotics, but, more relevantly, after the famous paper published by Turing (1950), in which he proposed for the first time the idea of a thinking machine and the now-popular Turing test to assess whether such a machine shows, in fact, any intelligence. As soon as the research group at Dartmouth publicly released the contents and ideas that arose from that summer meeting, a flow of government funding was reserved for the study of creating a non-human intelligence.

At that time, AI seemed to be easily within reach, but it turned out that this was not the case. At the end of the sixties, researchers realized that AI was indeed a tough field to manage, and the initial spark that had brought the funding started dissipating. This phenomenon, which has characterized AI throughout its history, is commonly known as the AI effect, and it is made of two parts: first, the constant promise of a real AI arriving in the next ten years; and second, the discounting of AI's behavior after it has mastered a certain problem, continuously redefining what intelligent means.

In the United States, DARPA's reason for funding AI research was mainly the idea of creating a perfect machine translator, but two consecutive events wrecked that proposal, beginning what would later be called the first AI winter. The Automatic Language Processing Advisory Committee (ALPAC) report in the US in 1966, followed by the Lighthill report (1973), assessed the feasibility of AI given the developments of the time and concluded negatively about the possibility of creating a machine that could learn or be considered intelligent. These two reports, together with the limited data available to feed the algorithms and the scarce computational power of the engines of that period, made the field collapse, and AI fell into disgrace for the entire decade.

In the eighties, though, a new wave of funding in the UK and Japan was motivated by the introduction of expert systems, which were basically examples of narrow AI as defined above. These programs were, in fact, able to simulate the skills of human experts in specific domains, but this was enough to stimulate a new funding trend. The most active player during those years was the Japanese government, and its rush to create the fifth-generation computer indirectly forced the US and UK to reinstate funding for research on AI.


This golden age did not last long, though, and when the funding goals were not met, a new crisis began. In 1987, personal computers became more powerful than Lisp machines, which were the product of years of research in AI. This marked the start of the second AI winter, with DARPA clearly taking a position against AI and further funding.

Luckily, this period ended in 1993 with the MIT Cog project to build a humanoid robot and with the Dynamic Analysis and Replanning Tool (DART), which paid back the US government for its entire AI funding since 1950. When DeepBlue defeated Kasparov at chess in 1997, it was clear that AI was back on top.

In the last two decades, much has been done in academic research, but AI has only recently been recognized as a paradigm shift. There are of course a series of causes that might help us understand why we are investing so much in AI nowadays, but there is one specific event that I think is responsible for the trend of the last five years. If we look at the following figure, we notice that, regardless of all the developments achieved, AI was not widely recognized until the end of 2012. The figure was created using CBInsights Trends, which plots the trends for specific words or themes (in this case, Artificial Intelligence and Machine Learning).

Figure 1. Artificial intelligence trend for the period 2012-2016.

More specifically, I drew a line on a specific date that I consider the real trigger of this new optimistic AI wave: December 4th, 2012. That Tuesday, a group of researchers presented at the Neural Information Processing Systems (NIPS) conference detailed information about the convolutional neural networks that had won them first place in the ImageNet classification competition a few weeks earlier (Krizhevsky et al., 2012). Their work improved classification accuracy from 72% to 85% and established the use of neural networks as fundamental for artificial intelligence. In less than two years, advancements in the field brought classification accuracy in the ImageNet contest to 96%, slightly higher than the human level (about 95%). The figure also shows three major growth trends in AI development, marked by three major events: the three-year-old DeepMind being acquired by Google in January 2014; the open letter of the Future of Life Institute, signed by more than 8,000 people, together with the study on reinforcement learning released by DeepMind (Mnih et al., 2015) in February 2015; and finally, the paper published in Nature in January 2016 by DeepMind scientists on neural networks (Silver et al., 2016), followed by the impressive victory of AlphaGo over Lee Sedol in March.
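
For readers curious about what such a convolutional network looks like in code, the following is a minimal sketch of my own (written with PyTorch, an assumed framework): it is far smaller than the AlexNet-style architecture of Krizhevsky et al. (2012), and the layer sizes, random data, and single training step are purely illustrative.

```python
# A tiny convolutional image classifier and one error-reduction step.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch, just to show the error-reduction loop.
images = torch.randn(8, 3, 32, 32)           # a batch of fake 32x32 RGB images
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```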

AI is intrinsically highly dependent on funding, because it is a long-term research field that requires an immense amount of effort and resources to be fully developed. There are rising concerns that we might currently be living through the next peak phase (Dhar, 2016), and that the thrill is destined to stop soon. However, I believe that this new era is different for three main reasons: i) (big) data, because we finally have the bulk of data needed to feed the algorithms; ii) technological progress, because storage capacity, computational power, better and greater bandwidth, and lower technology costs have allowed us to actually let the models digest the information they need; and iii) the democratization and efficient allocation of resources introduced by the Uber and Airbnb business models, which is reflected in cloud services (e.g., Amazon Web Services) and parallel computing operated by GPUs.

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford.

Brennan, T. J., Lo, A. W. (2011). The Origin of Behavior. Quarterly Journal of Finance, 7: 1043-1050.

Brennan, T. J., Lo, A. W. (2012). An Evolutionary Model of Bounded Rationality and Intelligence. PLoS ONE, 7(11): e50310.

Dhar, V. (2016). The Future of Artificial Intelligence. Big Data, 4(1): 5-9.

Domingos, P., Kok, S., Poon, H., Richardson, M., Singla, P. (2006). Unifying logical and statistical AI. Proceedings of the 21st National Conference on Artificial Intelligence, 1: 2-7.

Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems: 1097-1105.

Lighthill, J. (1973). Artificial Intelligence: A General Survey. In Artificial Intelligence: A Paper Symposium, Science Research Council.

Lo, A. W. (2012). Adaptive Markets and the New World Order. Financial Analysts Journal, 68(2): 18-29.

Lo, A. W. (2013). The Origin of Bounded Rationality and Intelligence. Proceedings of the American Philosophical Society, 157(3): 269-280.

Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518: 529-533.

Müller, V. C., Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence, Springer: 553-571.

Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529: 484-489.

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59: 433-460.

Categories: Artificial Intelligence
Tags: AI, Artificial Intelligence, Big Data, data scientist

About Francesco Corea

Editor at Cyber Tales. A complexity scientist and data strategist, Francesco is a strong supporter of an interdisciplinary research approach, and he wants to foster the interaction of different sciences in order to bring hidden connections to light. He is a former Anthemis Fellow, an IPAM Fellow, and a PhD graduate of LUISS University. His topics of interest are big data and AI, and he focuses on the fintech, medtech, and energy verticals.
