Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence: How to measure the I in AI – TechTalks

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Lee's comments quickly made the rounds across prominent tech publications, some of them running sensational headlines about AI dominance.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to think that it's very easy. You'll also be able to solve different variations of the same problem with multiple walls, multiple lines, and lines of different colors, just by seeing these three examples. But currently, there's no AI system, including the ones being developed at the most prestigious research labs, that can learn to solve such a problem from so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published this paper a few weeks before Lee Se-dol declared his retirement. In it, he provides many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper did not receive a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets more clicks. The 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensationalism.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollet's observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways.

Here's an example: OpenAI's Dota-playing neural networks needed 45,000 years' worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules will result in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges that the AI community has struggled with is defining intelligence. Scientists have debated for decades over a clear definition that would allow us to evaluate AI systems and determine what is or is not intelligent.

Chollet borrows the definition by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Key here are "achieve goals" and "wide range of environments." Most current AI systems are pretty good at the first part, achieving very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related task, such as drawing images of objects.
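
Legg and Hutter later formalized this definition mathematically. A rough rendering of their universal intelligence measure, taken from their own work rather than from Chollet's paper, looks like this:

```latex
% Legg and Hutter's universal intelligence measure, sketched from their
% own work on machine intelligence (not a formula in Chollet's paper):
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here E is the set of computable environments, K(mu) is the Kolmogorov complexity of an environment mu (so simpler environments carry more weight), and V is the expected cumulative reward that agent pi achieves in mu. "Achieve goals" becomes reward; "a wide range of environments" becomes the weighted sum over E.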

Chollet then examines the two dominant approaches to creating intelligent systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent themselves, these symbolic AI systems manifest the intelligence of their creators, who write complicated programs that can solve specific tasks.

The second approach, machine learning, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is the artificial neural network, a complex mathematical function that can create intricate mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called training. The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that the patients have cancer.
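
To make the training step concrete, here is a minimal sketch in Keras, Chollet's own library. The placeholder data, shapes, and architecture are illustrative assumptions, not a real diagnostic model:

```python
# A minimal sketch of the training loop described above, using Keras.
# The arrays, shapes, and architecture are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Suppose x_train holds grayscale x-ray images and y_train holds binary
# labels (1 = cancer present) annotated by radiologists.
x_train = np.random.rand(1000, 128, 128, 1)        # placeholder images
y_train = np.random.randint(0, 2, size=(1000,))    # placeholder labels

model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),          # probability of cancer
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# "Training": the network adjusts its weights to map slides to outcomes.
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Inference: process a new, unseen slide.
new_slide = np.random.rand(1, 128, 128, 1)
print(model.predict(new_slide))  # e.g. [[0.73]] -> 73% estimated likelihood
```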

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches, because neither has been able to create generalized AI that is flexible and fluid like the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence (either a collection of special-purpose programs or a general-purpose tabula rasa) are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that span many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is "local generalization," a limited maneuvering room within their own narrow domain.

In his paper, Chollet argues that the "generalization" or "generalization power" of any AI system is its ability to handle situations (or tasks) that differ from previously encountered situations.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet goes further and speaks of "developer-aware generalization," the ability of an AI system to handle situations that neither the system nor its developer has encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without having explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's totally wrong to compare Deep Blue, AlphaZero, AlphaStar, or any other game-playing AI with human intelligence.

Likewise, Aristo, the program that can pass an eighth-grade science test, does not possess the same knowledge as a middle school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not its understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks tailored to both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, which AI systems have historically struggled with.

On the other hand, it's also designed in a way that prevents the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data. As in the example shown at the beginning of this article, each concept is presented with a handful of examples.

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.
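
For illustration, an ARC-style task boils down to a few demonstration pairs plus an unsolved test input. The sketch below mirrors the JSON layout of the public ARC repository, though the grids themselves are invented:

```python
# An ARC-style task: grids are lists of lists of ints (0-9), with 0 as
# the background color. Field names follow the public ARC repository's
# JSON layout; the grids are invented for illustration.
task = {
    "train": [  # the "handful of examples" that specify the concept
        {"input": [[0, 0], [0, 5]], "output": [[5, 0], [0, 0]]},
        {"input": [[5, 0], [0, 0]], "output": [[0, 0], [0, 5]]},
    ],
    "test": [
        {"input": [[0, 5], [0, 0]]},  # solver must infer [[0, 0], [5, 0]]
    ],
}
```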

Also, the test dataset, the problems that are meant to evaluate the intelligence of the developed system, is designed in a way that prevents developers from solving the tasks in advance and hard-coding their solutions into the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.

According to Chollet, ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).
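
To see why ARC favors program synthesis, consider a toy sketch of the enumerative approach: search over compositions of primitives until one reproduces every demonstration pair. The DSL, primitives, and function names below are invented for illustration; they are not from Chollet's paper:

```python
# A toy enumerative program synthesizer over a tiny grid DSL -- an
# invented illustration of the approach, not code from Chollet's paper.
from itertools import product

# Toy DSL: each primitive maps a grid (list of lists of ints) to a grid.
def identity(g): return g
def flip_h(g): return [row[::-1] for row in g]           # mirror left-right
def flip_v(g): return g[::-1]                            # mirror top-bottom
def rotate90(g): return [list(r) for r in zip(*g[::-1])]

PRIMITIVES = [identity, flip_h, flip_v, rotate90]

def synthesize(examples, max_depth=3):
    """Return a composition of primitives consistent with all
    input/output pairs -- the few-shot spirit of ARC."""
    for depth in range(1, max_depth + 1):
        for ops in product(PRIMITIVES, repeat=depth):
            def program(g, ops=ops):
                for op in ops:
                    g = op(g)
                return g
            if all(program(i) == o for i, o in examples):
                return program
    return None

# Usage: learn "rotate 180 degrees" from a single demonstration.
examples = [([[1, 2], [3, 4]], [[4, 3], [2, 1]])]
prog = synthesize(examples)
print(prog([[5, 6], [7, 8]]))  # -> [[8, 7], [6, 5]]
```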

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests. But current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is a work in progress, it can become a promising benchmark to test the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.

Artificial Intelligence Market in the US Education Sector 2018-2022 | Increased Emphasis on Chatbots to Boost Growth | Technavio – Business Wire

LONDON--(BUSINESS WIRE)--The artificial intelligence market in the US education sector is expected to post a CAGR of nearly 48% during the period 2018-2022, according to the latest market research report by Technavio.
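
As a rough sanity check on what that growth rate implies (my arithmetic, not a figure from the report), a 48% CAGR compounded over the four years from 2018 to 2022 multiplies the market size nearly fivefold:

```python
# Back-of-envelope check of the headline figure; the rate comes from
# the press release, the computation is mine.
cagr = 0.48
years = 4  # 2018 -> 2022
growth_factor = (1 + cagr) ** years
print(f"{growth_factor:.1f}x")  # ~4.8x over the period
```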

The increasing emphasis on customized learning paths using AI will be one of the major drivers of the artificial intelligence market in the US education sector. The education system of the US is well developed, and teachers and students in the country are aware of AI technology, which increases the adoption of artificial intelligence in the country's education sector. Moreover, the growing reliance on machine learning technologies for the collection of data about student performance will contribute to expanding the artificial intelligence market in the US education sector. Also, the availability of advanced AI-based content delivery software at affordable prices in the US will boost market growth during the forecast period.

As per Technavio, the increased emphasis on chatbots will have a positive impact on the market and contribute to its growth significantly over the forecast period. This research report also analyzes other important trends and market drivers that will affect market growth over 2018-2022.

Artificial Intelligence Market in the US Education Sector: Increased Emphasis on Chatbots

The increased emphasis on chatbots will be one of the critical trends in the artificial intelligence market in the US education sector. Chatbots are increasingly being used by schools and colleges in the US. Chatbots use AI, ML, and deep learning technologies to store, process, and communicate data to students. Moreover, chatbots are capable of performing multiple functions, including holding conversations with students and answering queries. They can perform a diverse set of tasks and can also be used to evaluate and correct assessments submitted by students. As the scope of chatbots expands, research into their applicability is creating new opportunities for vendors, which will propel market growth during the forecast period.

"The rising focus on content analytics and the increase in crowdsourced tutoring are some other major factors that will boost market growth during the forecast period," says a senior analyst at Technavio.

Artificial Intelligence Market in the US Education Sector: Segmentation Analysis

This market research report segments the artificial intelligence market in the US education sector by education model (learner model, pedagogical model, and domain model) and end-user (higher education sector and K-12 sector).

The learner model will witness the highest incremental growth during the forecast period of 2018-2022. However, the higher education sector will account for the largest market share, owing to students' familiarity with modern technology.

Some of the key topics covered in the report include:

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation

Geographical Segmentation

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

If you are interested in more information, please contact our media team at media@technavio.com.

China Will Outpace US Artificial Intelligence Capabilities, But Will It Win The Race? Not If We Care About Freedom – Forbes

We've all heard that China is preparing itself to outpace not only the United States but every global economy in artificial intelligence (AI). China is graduating 2-3x as many engineers each year as any other nation, the government is investing to accelerate AI initiatives, and, according to Kai-Fu Lee in a recent Frontline documentary, China is now producing more than 10x as much data as the United States. And if data is the new oil, then China, according to Lee, has become the new Saudi Arabia.

It's clear that China is taking the steps necessary to lead the world in AI. The question that needs to be asked is: will it win the race?

According to the Frontline documentary, China's goal is to catch up to the United States by 2025 and lead the world by 2030. If things stay the way they are, I do believe China will outpace the United States in technical capabilities, available talent, and data (if it hasn't already). However, I also believe that eventually, the Chinese system will either implode, not be adopted outside of China, or both. Why? Let me explain.

A recent report from Freedom House shows that freedom on the internet is declining, and has been for quite some time. Study after study shows that when we know we're being surveilled, our behaviors change. Paranoia creeps in. The comfort of being ourselves is lost. And, ultimately, society is corralled into a state of learned helplessness where, like dogs with shock collars, our invisible limits are not clearly understood or defined but learned over time through pain and fear. This has been shown to lead to systemic mental illness ranging from mass depression to symptoms of PTSD and beyond.

Not so ironically, we're seeing a realization of these impacts within society, especially among the tech-literate, younger generations. A recent study from Axios found that those aged 18-34 are least likely to believe "It's appropriate for an employer to routinely monitor employees using technology" and most likely to "change their behavior if they know their employer was monitoring them." A deeper impact of this type of surveillance, what Edward Snowden has deemed our "permanent record," can be read about in a recent New York Times article about cancel culture among teens. People, especially the younger generations, don't want to be surveilled or to have their past mistakes held against them in perpetuity. And if they're forced into it, they'll find ways around it.

In the Freedom House report, China is listed as one of the worst nations in the world for this type of behavior. The Frontline documentary makes the same point, noting that China's powerful new social credit score is changing behavior by operationalizing an Orwellian future. Some places in China have gone as far as requiring facial recognition to get access to toilet paper in public restrooms. Is this the life we want to live?

If the system continues this way, people will change their behavior. They will game the system. They will put their devices down and do things offline when they don't want to be tracked. They will enter false information to spoof the algorithms when they're forced to give up information. They will wear masks or create new technologies to hide from facial recognition systems. In short, they will do everything possible to not be surveilled. And in doing so, they will provide mass amounts of low-quality, if not entirely false, data, poisoning the overarching system.

If China continues on its current path of forced compliance through mass surveillance, the country will poison its own data pool. This will lead to a brittle AI system that only works for compliant Chinese citizens. Over time, the system will cripple itself.

Great AI requires a lot of data, yes, but the best AI will be trained on the diversity of life. Its data will include dissenting opinions. It will learn from, adapt to, and unconditionally support outliers. It will conform to, and shepherd, the communities it serves, not the other way around. And if it does not, those of us living in a democratic world will push back. No sane, democratic society will adopt such a system unless forced into it through predatory economic activity, war, or both. We are already seeing an uprising against surveillance systems in Europe and the United States, where privacy concerns are becoming mainstream news and policies are now being put into place to protect the people, whether tech companies like it or not.

If our democratic societies decide to go down the same path as China because they're afraid we won't keep up with societies that don't regulate, then we're all bound to lose. A race to the bottom, a race to degrade privacy and abuse humanity in favor of profit and strategic dominance, is not one we will win. Nor is it a race we should care to win. Our work must remain focused on human rights and democratic processes if we hope to win. It cannot come down to an assault on humanity in the form of pure logic and numbers. If it does, we, as well as our democratic societies, will lose.

So what's the moral of this story? China will outpace the United States in artificial intelligence capabilities. But will it win the race? Not if we care about freedom.

Should the EU embrace artificial intelligence, or fear it? – EURACTIV

As Ursula von der Leyen took office as the new President of the European Commission this week, she said her administration will prioritise two issues above all: guiding Europe through the energy transition in response to climate change, and guiding it through the digital transition in response to new technologies.

On the latter, she has her work cut out. "Digitalisation is making things possible that were unthinkable even a generation ago," she told the European Parliament ahead of her approval last week.

"To grasp the opportunities and to address the dangers that are out there, we must be able to strike a smart balance where the market cannot. We must protect our European wellbeing and our European values. In the digital age, we must continue on our European path."

Many MEPs understand that the European path is meant to contrast with that of America, which has taken a light-touch approach to regulating the internet and digital technology. Brussels has stepped into that regulatory vacuum with laws and standards such as the General Data Protection Regulation that are now affecting the whole world.

At a EURACTIV event on Friday in Brussels, experts were divided over how aggressive von der Leyen should be in regulating artificial intelligence and data usage in order to protect European citizens.

"We need to be careful when we set the regulation so it doesn't stifle innovation," said Kristof Terryn, group chief operating officer at Zurich Insurance. The insurance industry is becoming heavily involved in artificial intelligence, using algorithms in many areas, including loss prediction and claims handling.

This innovation could be stifled if EU regulation becomes too onerous. "We keep talking about the risks of AI, but there are massive benefits as well," he said. As an example, he pointed to new technology in Japan which can automatically and immediately compensate people affected by an earthquake.

The EU should be careful to regulate only the applications of AI that are deemed risky, not all of it, he said.

Eline Chivot, a senior policy analyst at the Centre for Data Innovation, agreed. "There are worrying signals that the EU is falling behind the US and China in technology development; only 20% of European SMEs demonstrate digital intensity," she said. "Ethical discussions shouldn't sidetrack us from the competitiveness discussion."

But Jennifer Baker, a digital rights activist and EU privacy policy correspondent for IAPP, said that for her, civil rights and data protection are far more important than companies' profits.

She said there are major questions not being tackled, which are already affecting citizens without their knowledge, such as who owns the data and how AI systems process it, and how bias and discrimination are being embedded into these systems.

"We don't even know how many of these things are already out there because there's been a rush to make profits," she noted.

Wojciech Wiewiorowski, the acting European Data Protection Supervisor, said that his office is taking data protection very seriously and that it will be essential that the EU defines exactly what AI is and which types of applications could cause harm.

"Transparency is important," he said. "If you include bias, we need to know that the system operates that way."

"The bias is being built in by humans," said Baker. The best way to combat bias is to eliminate it from society, she said. "Right now we have the chance to tackle the bias before it's hardwired into AI."

There have already been reports of AI systems used in policing showing bias against minorities. Questions have arisen about situations where a self-driving car heading for a crash needs to choose which car, biker, or pedestrian it should swerve into, a situation where bias might dictate that it hits an older person instead of a younger person.

"The reason we're talking about bias is because it reflects our own bias as human beings, and that's difficult to accept," said Chivot. "But sometimes bias is a good thing." She cited as an example when AI might be used to select vulnerable populations for a medical study or disease treatment.

Some on the panel felt that the EU's efforts to deal with AI specifically so far (for instance, a high-level group's guidelines published earlier this year) have been underwhelming. "There's not a lot to disagree with there; they say they don't want bias and want human beings at the centre," said Baker.

"When you flip these statements on their head, it's utterly ridiculous. We want bias? We don't want humans at the centre?"

Wiewiorowski said he liked the document but agreed it can be difficult to precisely define what ethics are. "I don't know the definition of ethics, but I think we can all agree that there is something we can call general ethics."

As Ursula von der Leyen begins her term, technology industry stakeholders will be closely watching the Commission to see which direction it takes in regulating these developing technologies and the usage of data. But if her speech to the European Parliament last week is anything to go by, it appears she is not afraid to plough ahead with regulation.

"For us, the protection of a person's digital identity is the overriding priority," she told MEPs. "We have to have stringent security requirements and a unified European approach."

[Edited by Zoran Radosavljevic and Samuel Stolton]

How One Texas Entrepreneur Aims to Transform the World With Artificial Intelligence – Texas Monthly

Ben Lamm doesn't sell spoons. Declaring as much is a favorite line of his whenever someone asks what his two-year-old company, Hypergiant, does. What he means is that he doesn't produce anything as uniform and universal as utensils. Were he a purveyor of tableware, he wouldn't have to spend so much of his time customizing products to individual clients or explaining what can be done with them. Everybody knows what spoons are for.

Contrast that with the broadest definition of what Hypergiant does in fact sell (artificial intelligence-enabled software and hardware) and you'll appreciate Lamm's problem. Even many people lacking in technological savvy have heard of AI as a force with the potential to shape much of humanity's future, for better or worse. Some of those people look to get into business with Hypergiant without any real idea of what it is they're buying. They just know they want some. "It's like the most addictive drug that no one's ever had," says Lamm, who serves as the company's CEO.

Formed in late 2017, Hypergiant is the latest and by far the most ambitious enterprise launched by serial entrepreneur and Austin native Lamm, who previously founded and sold several companies in the realms of e-learning software, art and design technology, and AI-enhanced chatbots. Hypergiant has offices in Dallas, Houston, and Austin, and it has grown from just a couple dozen employees at the time of its official launch to a current staff of more than two hundred. Planning for the opening of a Washington, D.C., office to focus on defense-related opportunities is under way.

At the company's downtown Austin office, all the conference rooms are named for evil AI from pop culture. I met Lamm in Dolores (as in the villainous android on HBO's Westworld). Bearded and wearing his dark, wavy hair long atop a short, stocky frame, the 38-year-old Lamm is given to long answers in which his abundantly active mind has a way of veering from one subject to another without much warning.

The company had just eagerly publicized its Eos Bioreactor, essentially a small box in which AI software manages the growth of algae, which naturally removes carbon dioxide from the air. While it's just a prototype, Hypergiant has plans to build a commercially sized successor that could hook up to HVAC systems to reduce the carbon footprints of buildings. We discussed that and other ways in which Lamm believes AI will transform our world. He's not shy about touting his and his company's accomplishments, nor about his goal of one day building Hypergiant into a trillion-dollar enterprise.

Texas Monthly: Hypergiant aims to help its clients gather and analyze vast amounts of data. You're working on improving the sensory perception of machines. You're aiming to launch a network of small satellites gathering data from above. And you're looking to empower smart cities, stitching together data from cameras that are increasingly everywhere. So why use resources on building a better bioreactor? Fighting climate change seems like it's outside your core, data-focused business.

Ben Lamm: You don't have to be in the algae business or in the ocean business or in the fossil fuel industry to worry about these things. I believe that if you have really smart women and men that work for you, and you're a company that has the ability to invest and create the future, then I think that we have a choice of where we want to spend that time and those resources. For us, and I hope for the world, climate change should be one of those things. Do I think that we are going to solve climate change? No. Do I think that we can be a part in it? I mean, look, we will open-source the plans. And then if everyone just goes and builds their own bioreactor: great, awesome. We make money in a lot of areas. I don't need to make a trillion dollars off of the bioreactor. Now, it will be a part of our smart cities initiative, because I think you need to be building carbon-negative cities.

TM: Do you think technology's going to save us from climate change?

BL: I believe that people will. If you just get the cities on board, you don't even need to get the states on board. If you go get the big cities, you go get Miami, Austin, Dallas, Houston, D.C., New York, San Francisco, L.A., Chicago? If you get the cities, we can make a huge impact. We don't need the federal government to mandate climate change or carbon offset tax dollars or whatnot.

TM: Are you what I think of as an AI utopian, in the sense of people who see nothing but good that AI is going to do?

BL: Look, I'm a realist. The big thing that I believe will cause the most disruption is automation, not AI. This is going to sound terrible, but we've been through this before. Will there be troughs? Yeah. I mean, my last business, we were a conversational intelligence platform, and people were like, "You're going to get rid of all these people's jobs at call centers." We were like, "But those jobs should never have existed." A call center agent is basically a biological natural language processor. They listen to the words that another human says, on a phone, they type those words into a script, and then it tells them what to say. That's a mindless job. I believe that person could be an artist. Or that person could be a space engineer. Or anything. I do think that it's going to cause disruption. I don't think we're going to turn into Terminator. I don't think that it's going to take all the jobs. I think we're going to have a duty to humans to re-skill them and retrain them. One of the things I don't love, by the way, in AI: you've seen this whole trend where it's like, AI-powered art and AI music?

TM: They're going to write novels.

BL: I hate that. I f-ing hate that.

TM: Why is that?

BL: Because I think that we should use these technologies to give humans the data, so that they can make better decisions, informed decisions, and it should automate the shit that we shouldn't be doing. But I think art is where I draw the line. Why don't we spend that time making AI robots that are solar-powered that go around and clean up the ocean? There's a lot of other stuff that we could be doing instead of training an AI to paint as well as Monet. People talk a lot about ethical AI: will AI have bias if it's trained by all white men?

TM: And it clearly seems like that's been happening.

BL: Yeah, but it's also not technology's fault. It's humans' fault. Did you see when San Francisco banned facial recognition tech? I think that's dumb.

TM: Why is that?

BL: Because they were like, "Here's what China's doing with facial recognition tech, which is really bad and evil." China is segmenting humans based on physical facial characteristics, based on an assumption of their religion. Evil, terrible shit. China shouldn't f-ing do that. No one should do that, but that's bad decisions leveraging that technology. But here's what China is also doing: they're advancing facial rec tech. Some of the smartest tech minds in San Francisco and the Valley were like, "Oh, well, if other cities are going to ban it because other people are doing bad stuff with it, then we're not going to invest in it," and therefore the technology's not getting advanced. So China's getting more advanced in that category, because we're taking the stance that bad people use technology to do bad things.

TM: But aren't you at all sympathetic to the privacy argument? Cameras everywhere, all the time, watching everything that I do?

BL: Look, I probably don't have the right perspective on privacy, right? There's certain things I don't obviously want anyone to know or people to know. But I think there's a trade-off, right? We want a world where everything just shows up to our house, now even same-day. We want a world where we don't have to do fifty million things to get on an airplane. We also want a world where all of that is safe. Where what comes to our house doesn't blow up, or we don't get in the sky and blow up, right? We want that world, there's trade-offs. I think there's a privacy-to-convenience trade-off. I think that's an individual thing.

TM: The problem is, if that decision effectively gets ceded to the government or large corporations, then I end up not having a choice about it, right?

BL: Yeah, but I mean, drive down the street. Look how many cameras there are. I didn't put those cameras up. Do you have Apple?

TM: I do.

BL: So I get an Amber Alert all the time. There was some news thing I saw that Amber Alert's heart and mind are in the right place, but it's been very ineffective at finding missing children. If you could find a child that was abducted in minutes, before some god-awful something happens to them, what's the trade-off? There's trade-offs to privacy, security, and convenience.

TM: Considering the current occupant of the White House, and the recent abuses of power that have come out, think about a government having access to that information, and someone who isn't necessarily ethical or moral having it.

BL: Those people are going to exist. Those regimes, like China. They just kind of do whatever Xi wants there. Those regimes are going to exist, and the technologies are going to exist. Turning a blind eye to the technology is not something I think is a good idea, though.

TM: But you seem less concerned about it than some people. You trust in the goodness of humanity? Is that where this is coming from?

BL: I do. I don't think it's naive. I do believe in the goodness of humanity. I do believe in the goodness of tech. I think it all kind of wins in the end.

TM: Does the Texas of the future mean big cities where there's a camera on every corner and cameras throughout every building?

BL: I think that's already existent. I told my wife this a couple of weeks ago. I was on a toll road in Dallas, driving down the toll road. There's, like, I don't know how many feet, but like every twenty yards, there's camera pods. It looks like a little robot alien. So I kind of think that's already out there, right?

TM: I've noticed you put a lot of energy and time into your branding: your marketing and your branding and this sort of retro-futuristic aesthetic that you apply to everything. I also saw a magazine article where you said you spent six months on the branding of Hypergiant before launching the company. Why is that so important?

BL: I do care a lot about branding. Nothing goes on our website I don't personally look at or give feedback on. Nothing. Nothing goes on social media that I don't see. I think the cultural zeitgeist of an organization should manifest itself in the written word and in the visual implementation of the written word, of who you say you are. Good brands resonate with people. We are not going into meetings where people are like, "What's Hypergiant?" People have heard of us. I think part of that is because we spent a lot of time and attention on detail.

TM: So why are you doing this in Texas? The sorts of tech you're working on more often come out of places like Silicon Valley or Boston. Why are you here?

BL: I am a big believer in Texas. I have a house in Austin, a house in Dallas. I was born in Austin. You can build a multibillion-dollar company without leaving Texas. There's just so much opportunity, with the energy center being in Houston, and you've got medical and real estate and finance and other industries as well in Dallas. I'm super pro-Texas. I will never live anywhere else.
