Artificial Intelligence: Dreams vs Reality

The Hollywood idea of Artificial Intelligence has a long history in popular culture. In reality, the AI we know and use is much more limited. Are AI proponents overstating its benefits and driving too much hype? Can the technology live up to the expectations? In this episode, Eric speaks with Thomas Nield, AI influencer and author of the popular article, "Is Another AI Winter Coming?" They discuss the realities of AI, why popular (mis)understandings of the technology can be dangerous for companies looking to implement it, and some very real fears for the future of AI in finance.


Thomas Nield is a speaker, author, technology developer and business consultant for Southwest Airlines. Proficient in Java, Kotlin, Python, SQL and reactive programming, Nield is an open-source contributor as well as an author/trainer for O’Reilly Media.


Eric: 00:03 Welcome to The Finance Frontier. I’m your host, Eric Hathaway. Today we have the opportunity to speak with Thomas Nield, who’s a business consultant at Southwest Airlines, often balancing technology with operations research. He’s also an author and a trainer with O’Reilly Media. He’s written two books, Getting Started With SQL and Learning RxJava, regularly contributes to OSS projects, and recently wrote an article on the AI winter, titled “Is Another AI Winter Coming?”

Eric: 00:40 Thomas, welcome to The Finance Frontier. Why don’t you give us a little bit of an overview of you, your interest in AI, and a little bit about why you wrote the article, “Is Another AI Winter Coming?”

Thomas: 00:52 Yeah, sure. So primarily what I do at Southwest Airlines is work in operations research, in the department that comes up with schedules. There’s a lot of optimization, and machine learning is a conversation that comes up, but it has never really been applied to what we do, necessarily. Although it’s definitely a buzzword people have brought up before. I also speak at conferences. I’ve spoken at KotlinConf in San Francisco as well as Amsterdam for the past two years, and I do a lot of exploration on the side that I sometimes bring back to work with me. I’ve spoken at my workplace as well on that very subject of AI, as much as I have reservations about using that word, because it’s very broad. But that’s the gist of what I do.

Eric: 01:40 Awesome. Well, it sounds like you have a perspective here that you’ve written about, that there is a potential winter coming. Can you give me a little idea of why you really believe that AI winter is headed our way?

Thomas: 01:59 Yeah, so I think an AI winter is coming. What’s interesting about AI is that it has a long history, and there have been different methodologies that have been successful at certain tasks. But the expectations people had for what it could do well exceeded the tasks it was equipped for. If you look back in the 60s, 70s, 80s, before the first major AI winters, a lot of the algorithms were focused on search, tree search and all of that. For instance, when researchers had a computer play a game of checkers in the 60s, they were astounded that it could actually beat a person. And they made that leap, saying, “Oh wow, this algorithm is actually thinking.” Of course, that was the prelude to the AI winter, when people realized, “Okay, you know what, just because it can play checkers well does not mean it is reaching human-level intelligence. It’s just doing well at that one task.”

Thomas: 03:02 And I think that’s really a key component of what I believe, and I think there’s more and more sentiment growing about this: just because you found a tool that is great at a certain, very narrow set of tasks, it does not mean that it can do all tasks equally well. And I think deep learning has become that new thing that excites people very vaguely, but it seems to only be succeeding at certain tasks. It’s by no means going to create a Skynet anytime soon. There are a number of experts that are very clear about this. I think that’s why the AI winter’s coming, because there’s that disconnect.

Eric: 03:48 So I guess one of the industries that I compare it a little bit to is space exploration. I think there was a few years where we went to the moon and everybody got super excited about that and then it died on us, right? But it’s come back with a real fever around, even now, private sector venturing into space exploration because the technologies are more advanced. With the evolution of technology, that I guess would be my challenge question to you, there’s just so much more now we have to work with than we had in the past. Do you really believe that we’re not going to continue at that fast pace?

Thomas: 04:25 Well, we are in a totally different place than we were a couple of decades ago, even one decade ago. You’re absolutely right. We have an immense amount of resources and immense accessibility to those resources, and this has definitely created an AI renaissance, and it’s democratized it a little bit, which is great. However, Moore’s Law, which essentially states that every two years we get twice as much computing power, has actually stopped, as of a couple of years ago. Things will get faster. We will find ways to keep scaling. We always do. But I do think we are going to hit a limitation in terms of how many resources it takes just to do that one task.

Thomas: 05:12 And at what point do the costs of developing a solution that succeeds at that one task mean the resources are no longer attainable? I think there are other aspects besides just the scaling, everything from our ability to verify solutions to calculating solutions just as quickly. Again, we’ve got to define the scope of what we’re trying to achieve. Are we trying to be good at one task? Are we trying to create something that’s good at any task?

Eric: 05:39 Yeah, and that’s a great point. I think the term AI got so blown out. Everybody has to create an AI department now and do something, but what is it that we want to do?

Thomas: 05:47 And yet, I think just by the nature of how technical this is, and anything that’s deeply technical, a majority of people are not going to understand it. What’s even more interesting is that even software developers and programmers who are not familiar with mathematical modeling will buy into it too. So if even technology professionals buy into it, you can only imagine what the general population will think in terms of Skynet capabilities. Within corporate environments, there is that problem. I’ve seen it in many places, and I’ve heard many funny and sad anecdotes about it, where it’s like, “We need to be AI driven,” and nobody even stops and asks, what does that even mean?

Eric: 06:30 Right.

Eric: 06:38 But I will say, and I think you talked about this in your article a little bit too, that the marketing, right, the marketing and the naming of AI and associating it with robots … I think that might have done a little bit of damage to us in regards to consumer adoption. If it weren’t marketed as such and associated with Skynet or the Hollywood versions, do you still believe that we would see this AI winter? Do you think the technological advances we’re seeing are going to disappear, or is it really just the hype around the branding?

Thomas: 07:17 Yes, absolutely. I think that’s a great question. What’s interesting about bubbles in general is that it’s all about the expectations that are created versus what actually comes to fruition. So definitely, I think if the media aspect and the hype aspect were not really present, and instead we talked about specific solutions like, “Hey, this neural network,” which honestly I would rather call a multilayered regression, since that’s more what it actually does. It’s like, “Hey, it does really well at this task, this task, and this task. Let’s see if it can do some other tasks well too,” rather than saying, “Oh wow, this thing is acting so intelligently. Is it going to be Skynet next?”
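Thomas’s “multilayered regression” framing can be made concrete: a minimal feed-forward network really is just linear models stacked with a nonlinearity between them. Here is a rough sketch of a forward pass, purely illustrative, with arbitrary random weights rather than anything trained or production-like:

```python
import numpy as np

def sigmoid(z):
    # Squashing nonlinearity inserted between the stacked "regressions"
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # Each layer is literally a linear model (x @ W + b) followed by a
    # nonlinearity -- hence the "multilayered regression" framing
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(0)
# Two stacked layers: 3 inputs -> 4 hidden units -> 1 output
layers = [
    (rng.normal(size=(3, 4)), np.zeros(4)),
    (rng.normal(size=(4, 1)), np.zeros(1)),
]
prediction = forward(np.ones((1, 3)), layers)  # a single score in (0, 1)
```

Strip out the sigmoid and you are left with ordinary chained linear algebra, which is part of why the mystique around the term “neural network” can outrun what the math is actually doing.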

Thomas: 07:59 And whenever I see certain articles, and I think I cited this in my own, like when DeepMind, Google’s AI lab, came out with a better chess-playing algorithm that was “more human-like.” They threw an immense amount of resources at just creating a better chess-playing algorithm, and then the media, specifically Yahoo, reacts with, “Oh my gosh, this is basically replicating human intelligence. Everybody run for cover.”

Thomas: 08:30 But really, in the end, it’s just a better chess-playing algorithm that an immense amount of resources were thrown at. And the same with their other project, AlphaStar, which plays the computer game StarCraft. It’s still just another game, and games really seem to attract AI researchers a lot, because the world inside a game is very limited. There’s a very limited number of decisions to make and a very limited set of outcomes. The real world doesn’t necessarily work that way. So definitely.

Eric: 09:04 It’s interesting, I was watching a video with two robots playing soccer. I don’t know if you’ve seen this yet, but one robot-

Thomas: 09:11 I think I’ve seen that, yeah.

Eric: 09:11 Yeah, one robot kicks the ball and the other robot looks at the ball, sees it, but can’t take a step toward it, and literally falls over trying to block it and misses. It’s an interesting dynamic because it really does showcase a little bit of that human aspect, that perception and that sense of what to do next, that we’re not there yet.

Eric: 09:32 But on that point, let me ask you about a couple of other topics that I’ve read about that challenge things a little bit. You mentioned early on the idea of verifying outcomes. I was reading about GANs, Generative Adversarial Networks, which essentially pit AI against AI, challenging each other nonstop to arrive at the best solution. And then adding on to that, as we get closer to quantum computing becoming a reality, which I think we’re relatively close to, I know it’s still somewhat proof of concept, but that kind of jump in computing power, and we’re seeing computing become cheaper and more powerful. Do you think that will leap us forward to a point where we won’t see that winter? Or are you challenging those kinds of things as well?

Thomas: 10:32 I was thinking about this the other day, actually, and, like I said, I do think an AI winter is going to happen at some point. Generative Adversarial Networks are certainly fascinating, and I think they will give the movement more steam for a while. Although, at the same time, businesses are going to start asking, probably around this year, “Okay, so when is my investment going to start paying off?” But these new models certainly are interesting. Generative Adversarial Networks are commonly used for image generation, and I think deepfakes are a frequently cited example of what adversarial networks can do, which is cool and unsettling at the same time.

Eric: 11:11 Right.

Thomas: 11:15 But they do use those two algorithms, set against each other, to help improve each other. And I think the chess-playing algorithm I mentioned earlier did something similar, where it just kept playing itself over and over again to generate all this data, which it could then fit against, this massive amount of simulated data. So it’s that paradigm. I think that’ll be an interesting area of research.
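The self-play loop Thomas describes can be sketched in miniature: an agent plays a trivial game against itself and labels every position it saw with the eventual winner, manufacturing training data from nothing but the rules. The toy take-1-or-2 Nim variant below is my own illustration of the paradigm, not DeepMind’s actual pipeline:

```python
import random

def self_play(pile=10, seed=0):
    """Play one game of a toy take-1-or-2 Nim variant against itself,
    returning (pile_size, player_to_move, winner) training rows.
    Taking the last stone wins."""
    rng = random.Random(seed)
    states, player = [], 0
    while pile > 0:
        states.append((pile, player))          # record the position
        pile -= rng.choice([1, 2]) if pile > 1 else 1
        player = 1 - player                    # other player's turn
    winner = 1 - player  # the player who took the last stone
    return [(s, p, winner) for (s, p) in states]

# Thousands of such games give a model plenty of labeled positions,
# all generated from nothing but the rules of the game
data = [row for game in range(100) for row in self_play(pile=10, seed=game)]
```

AlphaZero-style systems do essentially this at enormous scale with learned (rather than random) policies, which is where the immense amount of resources Thomas mentions comes in.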

Thomas: 11:39 And you also mentioned quantum computing, and I’m curious to see what the outcome of that is. But my intuition tells me that even though we will get more data, even though we will get more computing power, it doesn’t change the nature of the problem. And I think in the end, heuristics, which is putting a human hand in how an algorithm works, are still going to yield the best-performing solutions, regardless of how much computing power or bandwidth you have. It may delay the AI winter, though. It may extend this AI hype cycle for a while, or things might just start tempering down.

Eric: 12:32 What if I flip the tables on you a little bit and say, with your knowledge of AI and what’s going on, as a consumer, what are your fears?

Thomas: 12:43 I definitely do have certain fears, and like you said, it has nothing to do with a Skynet where we get robot overlords. That is not it at all. But I think there are legitimate reasons to fear, and dangers in, all the different AI algorithms that are out there. For instance, I mentioned deepfakes earlier, and what a deepfake essentially does is allow you to create very convincing photos or video of somebody, superimposed doing something. For instance, if somebody took a politician and showed them robbing a bank, they could doctor that and make it look like it was actually them.

Thomas: 13:26 Then, and this is one thing I’ve thought about as well, there are also bots. There are a lot of bots pretending to be people, paired with the ability to generate images of people who are fake but look real. That is definitely a concern.

Thomas: 13:40 And then things like Google Duplex. Google Duplex is basically a system where you have a bot make a phone call for you to book an appointment, and it sounds like a person in that it converses with whoever it’s talking to. The thing that bothered me about that is, let’s say there’s some business or some entity that I want to massively troll and harass. I could just have a thousand bot callers calling them all day, and they’d think the entire population is mad at them, even though that’s not the case. And the difference between the whole Skynet theme and this is that there is some person behind the curtain steering all of it, using these tools for very nefarious purposes. So I think that is my greatest fear as a consumer.

Eric: 14:29 It’s a really interesting point you just made: behind all of what we currently see is human interaction. That’s where we’re getting the bias in AI, and many of those fears.

Thomas: 14:40 Yeah. And it’s also interesting to see, too, when a social media company massively censors something, and when people confront them about it, asking, “Why was that censored?”, they’ll say, “Oh, well, that was just the algorithm. It just decided to do that.”

Eric: 14:59 Right.

Thomas: 14:59 I’m sure there might be legitimate cases, but more than likely, my suspicion is that there’s a human hand guiding all of that, and they just use the algorithm as a scapegoat, as if it’s autonomous and independent.

Eric: 15:11 Sure.

Thomas: 15:12 So, yeah, I totally agree.

Eric: 15:14 If you were the global consultant on AI, how would you recommend organizations look at this going forward?

Thomas: 15:25 It depends, in the end, on the nature of the task, on what the client is interested in, and on what problems they’re seeking to solve. If they’re merely looking for opportunities to invest, that’s pretty open-ended, and when you go into research, it’s always a matter of seeing what sticks. Sometimes things work out, sometimes they don’t. But if they’re trying to solve something specific, it’ll be a matter of addressing, “Okay, this is the current state, this is what people are using to solve this kind of problem.” So it depends on the task and on what the client’s looking for.

Eric: 15:59 So, really interesting conversation, Thomas. Is there anything else that you’d like to offer that we haven’t discussed, haven’t talked about, or haven’t touched base on?

Thomas: 16:09 I think this was a fun podcast, and thank you, by the way. Regardless of when an AI winter will happen, this cycle has happened in the past, and if precedent says anything, it’s probably going to happen again. We just don’t know when. The best thing you can do in the end is focus on what problems you’re trying to solve. Avoid saying, “Oh, we need to use AI just to stay competitive,” without stopping to ask, “Okay, but what problems are we trying to solve?” It’s good to have that moment of introspection, to say, “What are the greatest threats to my business today, and what kinds of automation, models, or algorithms would best solve that problem?”

Thomas: 16:48 And usually, I have found, taking the problem-first approach rather than the solution-first approach makes an immense difference in learning and finding what’s effective. That’s a great way to be productive. And in the event an AI winter does happen, it’s a great way to protect yourself from being burned by having way too high expectations in something that doesn’t pan out, leaving you to write off a bunch of sunk costs.

Eric: 17:23 Well, hey, Thomas, thank you so much for joining us today. It’s been a pleasure.

Thomas: 17:27 Awesome. And thank you very much for having me. It was fun.

Eric: 17:33 We hope you’ve enjoyed this episode of The Finance Frontier. In our next interview, we’ll continue exploring the topic of artificial intelligence, so tune in every other Wednesday for new episodes, and subscribe on your favorite podcast app. Since we depend on listeners like you to help us spread the word, we’d love it if you’d take the time to post a review of our podcast on iTunes. Until next time, I’m your host, Eric Hathaway.


Love the show? Want to be featured as a guest? We’d love to hear your questions and comments and welcome guest recommendations. Our producer Sara Tatnall can be reached at sara.tatnall [at]
