The Present and Future of AI

A recent Zoom webinar sponsored by Bernstein and Perceiver AI, entitled Advanced Artificial Intelligence and Latest Trends: The Present and Future of AI, featured a number of leading thinkers in the field of AI: Mark Moerdler, Senior Vice President, Bernstein Research; Howie Altman, CEO, and Milton Hernandez, Head of R&D, Perceiver AI; Rahul Singhal, Chief Product and Marketing Officer, Innodata, Inc.; Roderick Schrock, Executive Director, Eyebeam; and Galina Datskovsky, Board of Directors, Vaporstream.

To watch the replay, click here: https://lnkd.in/ePWGJ_9t (Passcode: 62418498).

Topics discussed in the webinar included:

  • The latest applications and modalities of AI
  • Emerging trends and neural networks
  • Looking to the future – what’s next (including genetic programming, the digital equivalent of evolution itself)
  • How AI will drive new opportunities in the future and what new opportunities we are already seeing that are fueled by recent developments
  • How artists are incorporating digital technologies into their creative works
  • The role of equity and inclusion in the growth of the AI industry
  • How governments, companies, and individuals should embrace and support AI in order to maximize growth
  • Strategies that should be employed to capitalize on opportunities and exponential growth available to us right now

Below we provide summaries of the key points made by the webinar speakers:

Galina Datskovsky

First to speak was Galina Datskovsky, Board of Directors, Vaporstream, who began with a dictionary definition of AI: a branch of computer science dealing with the simulation of intelligent behavior in computers. She emphasized the word simulation, stressing that AI is not designed to replace humans but to simulate human intelligence for some purpose by imitating human behavior. Datskovsky stated: “We can’t expect the machine to behave differently to encompass every behavior that humans exhibit, but we want to imitate certain behaviors to hopefully get good outcomes.”

She mentioned that it’s important to think about AI in terms of its uses, one of which is language recognition, which she had the privilege of working on in the early days of its development. Another place you see emulation of human behavior is computer vision, as in semi-autonomous vehicles whose lane-assist features interpret camera input. AI is also used in robotics, such as the well-known Roomba vacuum cleaners.

Datskovsky cited machine learning, a huge part of AI, as something we’re much more familiar with, because it is emphasized so much in the headlines. She also mentioned medical treatment and diagnostic systems as areas where machine learning can be helpful.

AI can help find the most effective treatment, or help pioneer custom medical treatments

By sifting through volumes of data about disease treatment across demographics, AI can help find the most effective treatment or help pioneer custom medical treatments. For instance, a tissue sample can be taken from a cancer patient, the cells seeded into the wells of a multifaceted microarray, and a different chemotherapy combination applied to each well, custom-treating the patient.

This is only possible with AI, she said, because a computer-vision algorithm can count how many cells in each well are dead or alive after a certain period of time and identify the medicine that is going to work best for that particular individual. Tremendous advances such as this can stem from the right application of AI.
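As a concrete illustration of that selection step, here is a minimal sketch in Python. The treatment names, cell counts, and the rule of ranking wells by survival rate are all illustrative assumptions; the upstream computer-vision model that would produce the counts is not shown.

    def best_treatment(well_counts):
        """Pick the chemo combination whose well shows the lowest cell survival.

        well_counts maps a treatment name to (live_cells, dead_cells), the
        kind of per-well output a computer-vision model might report.
        """
        def survival(counts):
            live, dead = counts
            return live / (live + dead)
        return min(well_counts, key=lambda name: survival(well_counts[name]))

    # Hypothetical counts for three candidate treatments.
    wells = {
        "drug_A": (120, 30),           # 80% of tumor cells survived
        "drug_A + drug_B": (40, 110),  # ~27% survived
        "drug_B": (75, 75),            # 50% survived
    }
    print(best_treatment(wells))       # -> drug_A + drug_B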

Three broad reasons to invest in AI

She identified three broad reasons to invest in AI.

    1. To build systems that think like humans, or even exactly like humans; the latter is what would be termed strong AI.
    2. To simply get systems to work, without figuring out how humans think or reason; this is weak AI, because we are just trying to get to an outcome.
    3. To use human reasoning as a model, but not necessarily the end goal. This is really important, because the end goal of most businesses is not “let us emulate people” – it’s about using reasoning to get the best results for the business.

Most commonly, Datskovsky said, the industry builds AI with the third point in mind. What does this mean? It means companies don’t necessarily want to emulate human behavior but to achieve their business objectives. Amazon, for example, builds machine learning systems because it wants to improve how it offers consumer items by understanding customers’ preferences. “But it’s not,” she said, “emulating a person who would sit with you and analyze your preferences, it’s just trying to analyze data to do the best they can to emulate that.”

Mark Moerdler

Mark Moerdler, Senior Vice President at Bernstein Research, where he covers global software, spoke about AI and machine learning from an investment perspective.

AI is permeating every facet of business

He sees AI as impacting investing in three broad ways:

  1. He is seeing increasing use of AI technology in the investment process, with some investment managers aggressively leveraging AI while others take a more conservative approach, waiting for the technology to prove itself out.
  2. AI is permeating every facet of business, improving businesses’ products and services but also making new products possible. Understanding the opportunities and risks this creates is important from an investment point of view.
  3. There are many companies whose business is built entirely, or almost entirely, on AI, and more are being founded every day. We’ve only recently reached the point where these companies are going public.

Within the investment community, Moerdler said, we’re seeing AI being used in a myriad of ways within these three categories with the first being the most prevalent one. We’re in a world where the volume of data being created every day is increasing massively, driving the need for more powerful tools to analyze the data.

He said AI is being used to analyze and understand the vast amounts of data that are available about the economy, about every industry and about many companies themselves. One example is using AI, and more specifically machine learning, to monitor social media to understand changes in customer buying patterns.
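A toy version of that monitoring idea appears below: flag days when mentions of a product jump well outside the recent baseline. The mention counts, window size, and z-score threshold are invented for illustration; a production system would layer language models and entity resolution on top of something like this.

    from statistics import mean, stdev

    def flag_shifts(daily_mentions, window=7, threshold=3.0):
        """Yield (day_index, count) where a day's mention count sits more than
        `threshold` standard deviations from the trailing window's mean."""
        for i in range(window, len(daily_mentions)):
            baseline = daily_mentions[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and abs(daily_mentions[i] - mu) / sigma > threshold:
                yield i, daily_mentions[i]

    # Hypothetical daily mention counts; day 8 spikes sharply.
    mentions = [100, 96, 103, 99, 101, 98, 102, 100, 340]
    print(list(flag_shifts(mentions)))  # -> [(8, 340)]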

Another related example is Adobe, which leverages its own technology to put out a report before the holiday buying season detailing the trends in online and in-store purchasing – these reports have been remarkably accurate months before the buying season really kicks off.

When Moerdler joined Bernstein Research, he attended a call given by a former intelligence expert on reading body language and understanding the word choices that management teams make in order to grasp what they really mean. Today, he said, there are tools that analyze transcripts and press releases to glean a similar understanding.
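In the same spirit, a crude lexicon-based scorer gives the flavor of such tools. The word lists and scoring rule here are assumptions for illustration only; the commercial tools Moerdler refers to rely on trained language models rather than fixed lexicons.

    # Invented word lists for illustration.
    HEDGING = {"may", "might", "could", "uncertain", "headwinds", "challenging"}
    CONFIDENT = {"will", "strong", "record", "growth", "confident", "momentum"}

    def tone_score(transcript: str) -> float:
        """Return a score in [-1, 1]: negative leans hedging, positive confident."""
        words = [w.strip(".,;:!?") for w in transcript.lower().split()]
        hedges = sum(w in HEDGING for w in words)
        confident = sum(w in CONFIDENT for w in words)
        total = hedges + confident
        return 0.0 if total == 0 else (confident - hedges) / total

    print(tone_score("We are confident in strong growth, though costs could rise."))
    # -> 0.5 (three confident terms, one hedging term)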

More recently, he said, we’ve seen the emergence of funds that are using AI to pick stocks, generating short-term trades or options positions to capture valuation dislocations in the market. Today, we understand, there are funds where the principal idea generation is coming from AI applications. While he finds this interesting, he personally doesn’t believe the technology is there yet, but we’ll see.

Moerdler stated: “We’re seeing AI being leveraged in three main ways within companies themselves, and not just technology companies:”

  1. Many products can be enhanced using AI: for instance, the collision-avoidance technologies in cars, as well as features we see in smartphones and tools used daily in the development of drugs
  2. AI shortens the software development process, and with new and emerging capabilities in machine learning and quantum computing, those improvements can massively accelerate the rate of development as well. Many of us may not realize how many of the products we use today would not be possible without AI. Whether you consider Amazon’s Alexa or search engine personalization, AI is making it possible to solve previously unsolvable problems.
  3. Companies are using AI to improve their own businesses, with supply chains being a perfect example as they have become far more complex and variable, and AI can help to optimize the process. Another example is building products virtually first to help the design process and then testing them in simulators using complex real world conditions.

Three main ways that investment managers are looking to leverage the value AI is creating

Moerdler identified three main ways that investment managers are looking to leverage the value AI is creating.

  1. Traditionally, there were those that were investing in public companies and those who were investing in private companies, and they will often be different. And if you wanted to invest in a business where AI is core to the business value, then you needed to invest in a private company directly via private equity funds. Today, we’re seeing a marked increase in funds that are investing in both public and private vehicles and AI is one of the areas where we’re seeing significant increase in focus on the private equity investment side.
  2. We are finally seeing IPOs of companies that proclaim that AI is core to their business. Whether this is true or not is up for discussion, but these are companies that leverage AI technology to a greater or lesser extent. He expects more AI-centric public companies over the next year if the IPO market continues to hold up but, note, these are relatively small companies burning cash, with strong growth and even stronger multiples.
  3. The biggest use of AI today is within software, and in fact within the large public companies. Adobe, for example, has built the largest breadth of AI products, while Microsoft, Google, and Amazon have the widest range of AI functionality that customers can use to build AI-enabled applications. Consumer internet companies such as Google use AI everywhere: Netflix uses it for recommendations, and every social media company uses it for targeted advertising and to learn who you are and what you want.

Howie Altman

Howie Altman, CEO of Perceiver AI, spoke about the company’s approach. He began by discussing a topic most in the industry are familiar with, what’s called deep learning or neural networks – the things that IBM’s Watson is built on top of. However, Altman said, Perceiver has taken a novel approach called genetic programming.

In his presentation, he walked through some of the actual business outcomes and impacts that this approach is capable of producing that exceed those of Watson and other similar forms of AI, and closed by briefly discussing the future and what people may have heard being referenced as artificial general intelligence.

“So, to give a little bit of context about deep learning and neural networks,” Altman said, “we have had a ton of advancement in the space in the last several decades, but there are still very real limitations.”

One he cited is that when you’re training something on technology like Watson or TensorFlow, two of the most common deep learning platforms today, you’re starting the learning process from scratch every time – you can’t meaningfully leverage past learnings.

You’re also limited to pattern matching and recognition, which is something that the human mind is very strong at – which is why it was modeled that way; however, there are many problems in the world that require tools outside of pattern matching.

Models produced by deep learning are black boxes – you can’t actually look inside them

Altman said that one of the biggest things people may have seen in the news is that the models produced by deep learning are black boxes – you can’t actually look inside them, you can’t inspect how the artificial intelligence produced the outcome, or the recommendations that you’re seeing. For those who want to take a deeper look, he cited a recent paper published by DeepMind called “Neural Algorithmic Reasoning” that summarizes some of these limitations very well.

Genetic programming is a different approach

Genetic programming, Altman said, is a different approach to AI than deep learning. It has actually been around for over half a century, but until recently, with Perceiver’s implementation, there hasn’t been a version that was practical for real-world problems. There have been some very bespoke solutions using the approach: for instance, the NASA antenna developed using a related technique called genetic algorithms, which produced an antenna with a wild shape that was far more efficient than anything humans had ever designed before.

Perceiver’s genetic approach is equivalent to natural selection itself

In terms of Perceiver AI’s product, the company has developed an implementation of genetic programming. The genetic approach is the digital equivalent of natural selection itself – the biological evolutionary process, Darwinism, survival of the fittest. However it is best described, Altman said, it creates an optimized algorithm or solution to a given problem that isn’t constrained by existing deep learning approaches, so it looks beyond patterns for other relationships in the data.

Altman stated: “As you discover new insights you can actually take those insights and put them into what is effectively the digital genome. If you use the biological genome as an analogy – the building blocks of life, or the building blocks of a solution – you can take any knowledge that we have as a species and any proprietary knowledge that your company might have, and use those building blocks to seed the evolutionary process.”

He added, “The way we like to think about it is that instead of starting as a baby, who has to learn everything from first principles, you start as a very well-educated PhD student, which shortcuts a lot of that process, saving time and money. Additionally, the models are completely transparent, unlike black boxes, because at the end of the day what you get is a block of code; it’s an algorithm, not a black box that you cannot inspect – you can actually inspect it, and you can use that code in any application you want.”
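To make the mechanics concrete, here is a deliberately tiny genetic-programming loop – not Perceiver’s actual system. Candidate solutions are small Python expressions, a seed set of building blocks plays the role of the digital genome, and selection keeps whichever expressions best fit sample data. Note that the winner is an ordinary, inspectable expression rather than a black-box model.

    import random

    BUILDING_BLOCKS = ["x", "x*x", "2*x", "x+1", "3"]    # the seed "genome"
    SAMPLES = [(x, x * x + 3) for x in range(-5, 6)]     # target behavior: x^2 + 3

    def fitness(expr):
        """Mean squared error of the expression over the samples (lower is better)."""
        try:
            return sum((eval(expr, {"x": x}) - y) ** 2 for x, y in SAMPLES) / len(SAMPLES)
        except Exception:
            return float("inf")

    def mutate(expr):
        """Combine an expression with a random building block."""
        op = random.choice(["+", "-", "*"])
        return f"({expr}){op}({random.choice(BUILDING_BLOCKS)})"

    population = list(BUILDING_BLOCKS)
    for generation in range(30):
        population += [mutate(random.choice(population)) for _ in range(20)]
        population = sorted(population, key=fitness)[:10]   # survival of the fittest

    print(population[0], "fitness:", fitness(population[0]))  # e.g. (x*x)+(3) fitness: 0.0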

The interesting part, he said, is what applications are candidates for the process. Some of the ones Perceiver has had a lot of success with to date include:

  • Logistics and transportation, specifically in the airline space. Perceiver is being used to minimize fuel costs and CO2 emissions, and soon it will be used for things like optimizing crew scheduling and preventative maintenance on aircraft, especially the engines. “We are entering right now into a partnership with a UK-based software firm,” Altman said, “which is a leading provider of software for the charter airline space. We already have a few contracts with them that we will be starting work on shortly related to the vertical takeoff and landing space, primarily in Europe.”
  • He also mentioned energy and chemicals, where there are many use cases that require optimization – with optimization being a sweet spot for Perceiver.
  • Another pilot the company is going to be starting soon involves working with the Princeton Plasma Physics Lab to determine if Perceiver can be valuable in accelerating fusion R&D.
  • The company performed a pilot with Frontier Airlines involving a technique called tankering, where you purchase more fuel at one stop than you need because it is cheaper there. That actually adds a lot of complexity, because the extra weight on the plane increases fuel burn, which in turn depends on weather and other conditions, time since the engines were last serviced, and so on. Perceiver’s procedure was able to provide a 30% better benefit, which equated to six and a half million dollars of fuel savings per year. (A back-of-the-envelope version of the tankering trade-off appears after this list.)
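The sketch below illustrates only the core tankering arithmetic. Every figure – the burn penalty, the prices, the uplift amount – is invented; the actual optimization, as Altman notes, also weighs weather, engine condition, and scheduling.

    def tankering_saves(extra_kg, price_origin, price_dest, burn_penalty=0.035):
        """Return net savings (USD) from uplifting extra_kg of cheap fuel at the origin.

        burn_penalty is the assumed fraction of the carried fuel burned just to
        carry its own weight on the flight.
        """
        fuel_delivered = extra_kg * (1 - burn_penalty)   # what actually arrives
        cost_tankering = extra_kg * price_origin
        cost_buying_at_dest = fuel_delivered * price_dest
        return cost_buying_at_dest - cost_tankering

    # Hypothetical: uplift 2,000 kg at $0.80/kg instead of buying at $0.95/kg.
    print(f"net savings: ${tankering_saves(2000, 0.80, 0.95):,.2f}")  # -> $233.50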

Artificial General Intelligence (AGI), Altman said, is the term used to describe an artificial intelligence that is capable of performing basically any intellectual task that a human can – and, beyond that, of surpassing what humans are capable of. “These are things that are very fascinating,” he said, “and we’re still a ways away from that, but we believe that Perceiver as well as other technologies and approaches that we’re exploring in parallel will help in achieving that goal.”

He added, “We have to be very thoughtful and cautious because you’ve also probably heard about the singularity (which is when artificial intelligence exceeds human intelligence). We don’t want to create an artificial intelligence that is going to solve problems by creating larger issues for humanity or the planet, so we have to approach this in a very humble way because there’s going to be a point in the not-too-distant future where artificial intelligences will be structured in ways that we as humans won’t be able to understand – or at least not easily.”

Roderick Schrock

Roderick Schrock, Executive Director of Eyebeam, said the organization offers a platform for artists to engage technology and society. “Essentially,” he said, “what we do is we fund artists to develop and invent new projects and sometimes products that are creatively engaging with technologies that are impacting society at any given moment.” He added, “We’ve been doing that the past 20 plus years supporting about 500 alumni during that time, and as you might imagine that kind of mission keeps us on our toes in that we’re constantly helping support creative reflection around how we can approach technology from a human perspective and, in our case, that human perspective is one of a unique and singular type of creativity.”

As we move into an age where machine learning is increasingly the issue of the day, Schrock said, more and more of Eyebeam’s artists say they need support for the production and distribution of works that do exactly that – creating space for contemplation, allowing the public that poetic moment in which we can all reckon with the changes coming down the pike, both from a very positive standpoint and with an eye to the potential pitfalls we might want to avoid.

Schrock focused his talk on two artists he felt were doing spectacular work in creating space for reflection. The first, Lauren McCarthy, examines social relationships in the midst of surveillance, automation, and algorithmic living. She is also a programmer who created an open-source art and education platform that prioritizes access, diversity, and learning to code, and that currently has over 1.5 million users.

He singled out her project called Someone, first presented in 2020 as part of an exhibition held in Manhattan. It’s a work he’s very proud the organization was able to support, and it went on to win the Golden Nica at Ars Electronica last year – sort of the premier arts award for work that is grappling with technology. “With Someone,” Schrock said, “what she did was she created a performance where she put herself in the place of a home AI system – so she’s playing off of the idea of Alexa – and she imagined what it would feel like to be an algorithm as a human, and I think it exposes some of the tensions that we may all be feeling in terms of our relationships to these emerging technologies.”

He also mentioned Mimi Onuoha, an Eyebeam fellow in 2017, who has done work not so much exploring the actual technology of AI but really digging into what we consider to be the data informing the machine learning we use. “She thinks a lot about what classification means in that realm and how we think about who is similar to someone else,” Schrock said, “and what are the determinations, where are the borders and delineations between myself and the other, and are our computers actually making those judgments in real time with emerging technology – this classification is presented as two neon tubes in the gallery space.”

As someone approaches one of them, Schrock explained, it turns on, and then the computer learning system that is built into this sees who is next to that person, and if that algorithm determines them to be similar both tubes turn on and suddenly both lights are on. “We never quite know how those determinations are being made,” he said, “and I think it’s a way of sort of exposing that black box to more critical inquiry and to understanding that can help us get to a more positive place in terms of a relationship with these technologies.” To learn more about what Eyebeam is doing as an organization, you can visit their website at Eyebeam.org.
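For readers curious how such a piece might work under the hood, here is a speculative sketch of the decision rule only: a model embeds each visitor, and the second tube lights when the two embeddings are deemed similar. The embeddings, threshold, and function names are all stand-ins, not a description of the installation’s actual system.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    def tubes_lit(embedding_a, embedding_b, threshold=0.9):
        """Tube A lights for the first visitor; tube B joins only if the model
        judges the second visitor 'similar' to the first."""
        similar = cosine_similarity(embedding_a, embedding_b) >= threshold
        return ("A", "B") if similar else ("A",)

    print(tubes_lit((0.9, 0.4), (0.8, 0.5)))   # similar -> both tubes on
    print(tubes_lit((0.9, 0.4), (-0.5, 0.9)))  # dissimilar -> one tube on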

Rahul Singhal

The AI market is exploding

Rahul Singhal, Chief Product and Marketing Officer, Innodata, Inc., said the AI market is exploding, with the Gartner forecast projecting AI/ML to be a $200 billion market. Singhal said the training data market alone is around $8 billion.

“I think the AI/ML journey has been led by large tech firms like Google, Microsoft, Amazon and IBM,” he said, “and what we’re seeing within the data is more and more companies are jumping into and really investing in building AI solutions and technologies into every aspect of the workflow.”

He said that the old adage “garbage in, garbage out” applies to AI. “So, if you don’t have good training data, you can have the best data scientist building the models, but you’re going to run into issues and those models are going to fail.” As a result, he said, the company has invested significantly in deep domain expertise in legal, financial services, and healthcare.

He stated: “We are working with a very large asset management firm with 15 domain experts who are, on a daily basis, creating that data that could be used to understand different events. That expertise is not easily available, so we are investing in creating that data, and then we are investing in platforms that can allow you to have continuous learning happening with the data model. We just announced our new annotation platform, and we are in the process of releasing a new AI platform, which is a no code AI/ML platform that will do continuous learning.”
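Singhal doesn’t detail how the continuous learning works, but a generic human-in-the-loop annotation loop, sketched below with placeholder models and thresholds, captures the usual shape: low-confidence predictions are routed to domain experts, and their labels feed the next retraining round.

    def toy_model(item):
        """Placeholder model: more 'confident' on longer documents."""
        confident = len(item) > 20
        return ("relevant", 0.95 if confident else 0.55)

    def toy_annotate(item):
        """Stand-in for a human domain expert supplying the correct label."""
        return "expert_label_for:" + item

    def toy_retrain(batch):
        """Stand-in for a training job; returns the next model version."""
        print(f"retraining on {len(batch)} examples")
        return toy_model

    def continuous_learning_round(model, unlabeled, annotate, retrain, threshold=0.8):
        """One loop iteration: auto-accept confident predictions, escalate the rest."""
        training_batch = []
        for item in unlabeled:
            label, confidence = model(item)
            if confidence < threshold:
                label = annotate(item)          # route low-confidence items to a human
            training_batch.append((item, label))
        return retrain(training_batch)

    continuous_learning_round(
        toy_model,
        ["short doc", "a considerably longer document"],
        toy_annotate,
        toy_retrain,
    )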

Milton Hernandez

Milton Hernandez, Head of R&D at Perceiver AI, said that one of the company’s concerns is to create and enable technologies that allow you to incorporate existing knowledge into a solution. “If you think about it,” he said, “when we typically talk about artificial intelligence solutions, we are having the problem of trying to rediscover the wheel, and there is a whole history of human knowledge – there are solutions that might be perfect that are not being incorporated.” He added: “So, there is a need in the field to be able to incorporate what people know about a domain and then try to either improve it or change it in a specific way, so there’s a need for that two-way communication.”

Hernandez said that people who hail from the 80s remember the term expert systems, which were basically ways to systematize the knowledge of human experts. There is still a need for that, and a tremendous amount of computing resources is being spent right now just rediscovering what is already known. As the anecdotal reference everybody uses, Hernandez mentioned Deep Blue, the computer program that beat the world chess champion some years back: the shining example of artificial intelligence is actually a hybrid.

In that sense, according to Hernandez, it is less purely artificial intelligence: Deep Blue is not only a program evaluating chess moves but also draws on a database of smart chess moves coming from chess experts. That hybrid is the model Perceiver AI sees as a way to grow knowledge quickly and to share it back with human experts, continuing that interchange with them.
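A toy illustration of that hybrid, invented here rather than drawn from Deep Blue’s actual architecture, consults a curated book of expert moves first and falls back to a brute evaluation function only for unknown positions.

    # Curated expert knowledge: known positions mapped to recommended moves.
    EXPERT_BOOK = {"start": "e2e4", "sicilian": "d2d4"}

    def evaluate(position, move):
        """Stand-in for an engine's evaluation function (dummy scoring)."""
        return sum(map(ord, move)) % 100

    def choose_move(position, legal_moves):
        if position in EXPERT_BOOK:                # systematized human expertise
            return EXPERT_BOOK[position]
        return max(legal_moves, key=lambda m: evaluate(position, m))  # fall back to evaluation

    print(choose_move("start", ["e2e4", "d2d4"]))      # book move: e2e4
    print(choose_move("endgame_17", ["kg1", "kf1"]))   # no book entry: evaluates moves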

Currently, he said, many of the algorithms in use, some dating back several decades, are open-box algorithms – everybody knows how they work. But when we start applying things like deep learning and neural networks to decisions like that, we end up with black-box models where the decision-making mechanism is opaque.

A model may have billions of optimized parameters that cannot really be understood by human beings. Once you have your black-box model, you can apply certain evaluative measures to gauge how much each parameter weighs in the final decision, but the result is anecdotal and example-driven.
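The sketch below shows the kind of example-driven probing Hernandez alludes to: nudge one input at a time and watch how an opaque model’s output moves. The model here is a trivial stand-in; with billions of parameters, such probes stay anecdotal, which is exactly his point.

    def black_box(features):
        """Placeholder for an opaque model we cannot inspect directly."""
        return 0.7 * features[0] - 0.2 * features[1] + 0.05 * features[2]

    def sensitivity(model, example, eps=1e-3):
        """Finite-difference sensitivity of the output to each input feature."""
        base = model(example)
        grads = []
        for i in range(len(example)):
            bumped = list(example)
            bumped[i] += eps
            grads.append((model(bumped) - base) / eps)
        return grads

    print(sensitivity(black_box, [1.0, 2.0, 3.0]))  # approximately [0.7, -0.2, 0.05]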

“Another way,” Hernandez said, “which is what we’re doing at Perceiver, is to generate a model that’s understandable by a person – it doesn’t generate a billion weights, it generates an algorithm that can be read by anyone who understands the principles of computer science.”