Archive for the ‘Artificial Intelligence’ Category

A Look Back on the Dartmouth Summer Research Project on … – The Dartmouth

At this convention, which took place on campus in the summer of 1956, the term "artificial intelligence" was coined by scientists.

by Kent Friel | 5/19/23 5:10am

For six weeks in the summer of 1956, a group of scientists convened on Dartmouth's campus for the Dartmouth Summer Research Project on Artificial Intelligence. It was at this meeting that the term "artificial intelligence" was coined. Decades later, artificial intelligence has made significant advancements. While the recent advent of programs like ChatGPT is changing the artificial intelligence landscape once again, The Dartmouth investigates the history of artificial intelligence on campus.

That initial conference in 1956 paved the way for the future of artificial intelligence in academia, according to Cade Metz, author of the book "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World."

"It set the goals for this field," Metz said. "The way we think about the technology is because of the way it was framed at that conference."

However, the connection between Dartmouth and the birth of AI is not very well-known, according to some students. DALI Lab outreach chair and developer Jason Pak '24 said that he had heard of the conference, but that he didn't think it was widely discussed in the computer science department.

"In general, a lot of CS students don't know a lot about the history of AI at Dartmouth," Pak said. "When I'm taking CS classes, it is not something that I'm actively thinking about."

Even though the connection between Dartmouth and the birth of artificial intelligence is not widely known on campus today, the conference's influence on academic research in AI was far-reaching, Metz said. In fact, four of the conference participants built three of the largest and most influential AI labs at other universities across the country, shifting the nexus of AI research away from Dartmouth.

Conference participants John McCarthy and Marvin Minsky would establish AI labs at Stanford and MIT, respectively, while two other participants, Allen Newell and Herbert Simon, built an AI lab at Carnegie Mellon. Taken together, the labs at MIT, Stanford and Carnegie Mellon drove AI research for decades, Metz said.

Although the conference participants were optimistic, in the following decades they would not reach many of the milestones they believed would be possible with AI. Some participants in the conference, for example, believed that a computer would be able to beat any human in chess within just a decade.

"The goal was to build a machine that could do what the human brain could do," Metz said. "Generally speaking, they didn't think [the development of AI] would take that long."

The conference mostly consisted of brainstorming ideas about how AI should work. However, there was very little written record of the conference, according to computer science professor emeritus Thomas Kurtz, in an interview that is part of the Rauner Special Collections archives.

"The conference represented all kinds of disciplines coming together," Metz said. At that point, AI was a field at the intersection of computer science and psychology, and it had overlaps with other emerging disciplines, such as neuroscience, he added.

Metz said that after the conference, two camps of AI research emerged. One camp believed in what is called neural networks, mathematical systems that learn skills by analyzing data. The idea of neural networks was based on the concept that machines can learn like the human brain, creating new connections and growing over time by responding to real-world input data.
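As a rough illustration of what "learning skills by analyzing data" means in practice, here is a toy single-neuron example (invented for illustration; it is not from the article). Instead of following hand-written rules, the neuron starts with random connection strengths and nudges them toward correct answers on example data:

```python
# A toy "neuron" that learns the logical OR function from examples,
# rather than being programmed with explicit rules.
import random

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

# Training data: input pairs and the desired output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

lr = 0.1  # learning rate: how strongly each error adjusts the weights
for _ in range(1000):
    for (x1, x2), target in examples:
        out = w1 * x1 + w2 * x2 + b   # weighted sum of inputs
        error = out - target
        # Nudge each weight to shrink the error (gradient descent).
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b -= lr * error

# After training, the rounded outputs match the targets 0, 1, 1, 1.
for (x1, x2), target in examples:
    print((x1, x2), round(w1 * x1 + w2 * x2 + b))
```

Real neural networks stack millions of such units, but the core idea the conference participants debated is the same: connection strengths grow and shrink in response to input data.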

Some of the conference participants would go on to argue that it wasn't possible for machines to learn on their own. Instead, they believed in what is called symbolic AI.

"They felt like you had to build AI rule-by-rule," Metz said. "You had to define intelligence yourself; you had to rule-by-rule, line-by-line define how intelligence would work."

Notably, conference participant Marvin Minsky would go on to cast doubt on the neural network idea, particularly after the 1969 publication of Perceptrons, co-authored by Minsky and mathematician Seymour Papert, which Metz said led to a decline in neural network research.

Over the decades, Minsky adapted his ideas about neural networks, according to Joseph Rosen, a surgery professor at Dartmouth-Hitchcock Medical Center. Rosen first met Minsky in 1989 and remained a close friend of his until Minsky's death in 2016.

Minsky's views on neural networks were complex, Rosen said, but his interest in studying AI was driven by a desire to understand human intelligence and how it worked.

"Marvin was most interested in how computers and AI could help us better understand ourselves," Rosen said.

In about 2010, however, the neural network idea was proven to be the way forward, Metz said. Neural networks allow artificial intelligence programs to learn tasks on their own, which has driven a current boom in AI research, he added.

Given the boom in research activity around neural networks, some Dartmouth students feel like there is an opportunity for growth in AI-related courses and research opportunities. According to Pak, currently, the computer science department mostly focuses on research areas other than AI. Of the 64 general computer science courses offered every year, only two are related to AI, according to the computer science department website.

"A lot of our interests are shaped by the classes we take," Pak said. "There is definitely room for more growth in AI-related courses."

There is a high demand for classes related to AI, according to Pak. Despite being a computer science and music double major, he said he could not get into a course called MUS 14.05: Music and Artificial Intelligence because of the demand.

DALI Lab developer and former development lead Samiha Datta '23 said that she is doing her senior thesis on natural language processing, a subfield of AI and machine learning. Datta said that the conference is "pretty well-referenced," but she believes that many students do not know much about the specifics.

She added that she thinks the department is aware of the shortage of courses directly related to AI and is trying to improve it, and that it is more possible to do AI research at Dartmouth now than it would have been a few years ago, due to the recent onboarding of four new professors who do AI research.

"I feel lucky to be doing research on AI at the same place where the term was coined," Datta said.

Artificial intelligence: Implications for strategic plans Inside INdiana … – Inside INdiana Business

At this moment, many business leaders don't need to understand the intricacies of artificial intelligence (AI) or how to interpret raw analytics to know that they need to invest in AI. The destabilization of the economy, ongoing geopolitical tensions, and the residual impact of the COVID-19 pandemic are just a few of the circumstances that have forced us to let go of our preconceived notions about how the future will most likely evolve.

Strategic planning has always been a crucial aspect of business success, but in today's rapidly changing landscape it's more important than ever. Artificial intelligence has the potential to transform the way we approach strategic planning. AI can help companies gather and analyze massive amounts of data, automate processes, and provide valuable insights that help inform decision-making.

Acknowledging the Reality of AI Technologies

AI is no longer a thought-provoking, futuristic concept: it has become an indispensable tool for many companies. One of the key advantages of AI is its ability to generate decisions and assess outcomes based on complex data sets. This makes it particularly attractive for leaders seeking to monitor strategic plans. Additionally, AI's capacity for adapting to new rules and information means that it can continuously improve over time. Incorporating machine learning into existing information management systems can take data processing to the next level, resulting in even greater intelligence and insights.

Reflecting Upon the Nature of Strategic Planning

As companies operate in an increasingly dynamic and ever-changing environment, the traditional approach to strategic planning that relies upon periodic reports is no longer sufficient. Companies need to move beyond legacy plans and assumptions and embrace a more dynamic and data-driven approach to strategic planning. That's why the use of AI technology continues to gain traction: it can help companies develop, track, and update strategic plans in a more efficient and effective way. In addition, continuously monitoring and updating strategic plans using AI enables companies to remain aligned with business goals throughout the year, instead of being constrained by periodic planning cycles.

Understanding Organizational and Managerial Implications

As we know, AI has the potential to streamline countless repetitive, low-visibility tasks in a variety of business units. By reducing the burden of these tasks, AI empowers employees to focus on higher value-added activities, ultimately driving innovation. Let's consider additional organizational and managerial implications that come with incorporating this technology into developing, monitoring, and updating strategic plans. Here are a few aspects to keep in mind:

Organizational change: The integration of AI into supporting the development, tracking, and updating of strategic plans can require significant changes to the way work is organized and executed. As a result, organizations may need to update job descriptions, provide training, and potentially reorganize or form new teams to fully leverage the benefits of the technology. This is all in addition to securing the talent with the skillsets to deploy AI.

Managerial responsibility: Managers must assume new responsibilities when implementing AI to support strategic plans. While oversight and management of AI systems may be delegated to a unit or department, managers within each department must understand their responsibility for processes and for data collection and management. This requires that they understand the technology, even if only at the most basic level, and ensure that their teams understand it as well, along with how it relates to their roles and responsibilities.

Data quality: Given that AI relies on data to make decisions, the quality of the data can have a significant impact on the effectiveness of the technology. Organizations must invest in data management and ensure that data is accurate, complete, secure, and up to date to realize the full potential of AI in strategic planning. This involves organizational investment and management's ability to garner support, implement change, and lead by example.

Creating a Well-Informed Business Strategy

As the business environment continues to experience rapid and at times unpredictable change, more companies are recognizing the importance of leveraging AI to develop, track, and update their strategic plans. By embracing the applications of this fast-evolving technology, companies can gain a competitive edge by making better informed decisions and keeping up with market dynamics. With the ability to analyze complex data sets and generate insights in real-time, AI provides a powerful tool for developing agile and responsive strategic plans. By continuously monitoring and updating these plans using AI, companies can ensure they remain relevant and aligned with business goals and prevent themselves from falling behind their competitors who have yet to embrace these new technologies.

Tuesday Strong's company, Strong Performance Management, LLC, is approved by the Indiana Professional Licensing Agency as a provider of continuing education for licensed professional engineers. Learn more here.

What if artificial intelligence isn't the apocalypse? – EL PAÍS USA

In just six months, searches for "artificial intelligence" on Google have multiplied by five. ChatGPT, launched on November 30, 2022, already has tens of millions of users. And Sam Altman, the CEO of OpenAI, the company that created ChatGPT, has already appeared before the United States Congress to explain himself and answer questions about the impact of AI. By comparison, it took Mark Zuckerberg 14 years to go to Washington to talk about the role of Facebook in society.

Altman has been oddly blunt about the technology that his firm produces. "My worst fears are that we can cause significant harm to the world… I think if this technology goes wrong, it can go quite wrong," he said while testifying. However, some analysts have noted that the words about his supposed fears may be carefully calculated, with the intention of encouraging more stringent regulation so as to hinder the rise of competitors to OpenAI, which already occupies the dominant position in the sector.

Heavy and bombastic phrases about the explosion of AI have already spawned their own memes. The term "criti-hype," created in 2021 to describe criticism that buys into a new technology's hype, has become popularized thanks to ChatGPT. A pioneering example of criti-hype was the case of Cambridge Analytica, when the company was accused of harvesting Facebook data to understand and influence the electorate during the 2016 presidential election.

The pinnacle of these statements was the departure of Geoffrey Hinton, known as the "godfather of AI," from Google. He left the company to be able to speak freely about the dangers of AI: "From what we know so far about the functioning of the human brain, our learning process is probably less efficient than that of computers," he told EL PAÍS in an interview after departing from Google.

Meanwhile, the U.K. government's outgoing chief scientific adviser has just said that AI could be as big as the Industrial Revolution was. There are already groups trying to organize, so that their trades are not swept away by this technology.

There are too many prophecies and fears about AI to list. But there's also the possibility that the impact of this technology will actually be bearable. What if everything ended up going slower than predicted, with fewer shake-ups in society and the economy? This opinion is valid, but it hasn't been deeply explored amidst all the hype. While it's hard to deny the impact of AI in many areas, changing the world isn't so simple. Previous revolutions have profoundly changed our way of life, but humans have managed to adapt without much turbulence. Could AI also end up being a subtle revolution?

"At the very least, [AI has caused] a big structural change in what software can do," says Benedict Evans, an independent analyst and former partner at Andreessen Horowitz, one of Silicon Valley's leading venture capital firms. "It will probably allow a lot of new things to be possible. This makes people compare it to the iPhone. It could also be more than that: it could be more comparable to the personal computer, or to the graphical user interface, which allows interaction with the computer through the graphical elements on the screen."

These new AI and machine learning (ML) technologies obviously carry a lot of weight in the tech world. "My concern is not that AI will replace humans," says Meredith Whittaker, president of Signal, a popular messaging app. "But I'm deeply concerned that companies will use it to demean and diminish the position of their workers today. The danger is not that AI will do the job of workers: it's that the introduction of AI by employers will be used to make these jobs worse, further exacerbating inequality."

It must be noted that the new forms of AI still make a lot of mistakes. José Hernández-Orallo, a researcher at the Leverhulme Centre for the Future of Intelligence at Cambridge University, has been studying these so-called hallucinations for years. "At the moment, [AI is] at the level of a know-it-all brother-in-law. But in the future, [it may be] an expert, perhaps knowing more about some subjects than others. This is what causes us anxiety, because we don't yet know in which subjects [the AI] is most reliable," he explains.

"It's impossible to build a system that never fails, because we'll always be asking questions that are more and more complex. At the moment, the systems are capable of the best and the worst… They're very unpredictable," he adds.

But if this technology isn't so mature, why has it had such a sudden and broad impact in the past few months? There are at least two reasons, says Hernández-Orallo. First, commercial pressure: "The biggest problem comes because there is commercial, media and social pressure for these systems to always respond to something, even when they don't know how. If higher thresholds were set, these systems would fail less, but they would almost always answer 'I don't know,' because there are thousands of ways to summarize a text."
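The threshold trade-off described here can be sketched schematically (this is an invented toy example, not the researcher's code): a system that only answers when its confidence clears a threshold fails less often, but says "I don't know" more often.

```python
# Toy abstention rule: answer only when the model's confidence in its
# best candidate clears a threshold; otherwise decline to answer.

def answer(scores, threshold):
    """Pick the highest-scoring candidate answer, or abstain below threshold."""
    best, confidence = max(scores.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else "I don't know"

# Hypothetical candidate answers with model-assigned confidences.
scores = {"Paris": 0.92, "Lyon": 0.05, "Marseille": 0.03}

print(answer(scores, threshold=0.5))   # confident enough -> "Paris"
print(answer(scores, threshold=0.95))  # too strict -> "I don't know"
```

Raising the threshold trades coverage for reliability, which is exactly the commercial pressure Hernández-Orallo describes: a chatbot that frequently declines to answer is safer but less impressive to users.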

The second reason, he notes, is human perception: "We have the impression that an AI system must be 100% correct, like a mixture of a calculator and an encyclopedia. But this isn't the case. For language models, generating a plausible but false text is easy. The same happens with audio, video, code. Humans do it all the time, too. It's especially evident in children, who respond with phrases that sound good, but may not make sense. With kids, we just tell them that's funny, but we don't go to the pediatrician and say that my son hallucinates a lot. In the case of both children and certain types of AI, [there is an ability] to imitate things as best as possible," he explains.

The large impact on the labor market will fade when it's clear that there are things that the AI doesn't properly complete. Similarly, when the AI is questioned and we are unsure of the answer it offers, disillusionment will set in. For instance, if a student asks a chatbot about a specific book that they haven't read, it may be difficult for them to determine if the synopsis is completely reliable. In some cases, even a margin of doubt will be unacceptable. It's likely that, in the future, humans using AI will even assume (and accept) that the technology will make certain errors. But with all the hype, we haven't reached that stage yet.

The long-term realization of AI's limited impact still doesn't mean that the main fear, that AI is more advanced than human intelligence, will go away. In the collective imagination, this fear becomes akin to the concept of a machine taking control of the world's software and destroying humans.

"People use this concept for everything," Hernández-Orallo shrugs. "The questions [that really need to be asked when thinking about] a general-purpose system like GPT-4 are: how much capacity does it have? Does it need to be more powerful than a human being? And what kind of human being: an average one, the smartest one? What tasks is it specifically going to be used for? All of [the answers to these questions] are very poorly defined at this point."

Matt Beane, a professor at UC Santa Barbara, opines that ever since we've imagined machines that can replace us, we have feared them. "We have strong evidence that shows how we rely on criticism and fear, as well as imagination and assertiveness, when it comes to thinking about new technologies."

Fear has been the most recurring emotion when it comes to this issue. "We seem to fall into a kind of trance around these [AI] systems, telling these machines about our experiences," says Whittaker. "Reflexively, we think that they're human… We begin to assume that they're listening to us. And if we look at the history of the systems that preceded ChatGPT, it's notable that, while these systems were much less sophisticated, the reaction was often the same. People locked themselves in a surrogate intimate relationship with these systems when they used them. And back then, just like today, the experts were predicting that these systems would soon (always soon, never now) be able to replace humans entirely."

Adopting Artificial Intelligence: Things Leaders Need to Know – InfoQ.com

Artificial intelligence (AI) can help companies identify new opportunities and products, and stay ahead of the competition. Senior software managers should understand the basics of how this new technology works, why agility is important in developing AI products, and how to hire or train people for new roles.

Zorina Alliata spoke about leading AI change at OOP 2023 Digital.

In recent studies, 57% of companies said they will use AI and ML in the next three years, Alliata explained:

Chances are, your company already uses some form of AI or ML. If not, there is a high chance that it will do so in the very near future in order to stay competitive.

Alliata mentioned that AI and ML are increasingly being used in a variety of industries, from movie recommendations to self-driving cars, and are expected to have a major impact on businesses in the coming years.

Software leaders should be able to understand how the delivery of ML models is different from regular software development. To manage the ML development process correctly, it is important to have agility by using a methodology that allows for quick pivots, iterations, and continuous improvement, Alliata said.

According to Alliata, software leaders should be prepared to hire or train for new roles such as data scientist, data engineer, and ML engineer. She mentioned that such roles might not yet exist in current software engineering teams, and that they require very specific skills.

InfoQ interviewed Zorina Alliata about adopting AI and ML in companies.

InfoQ: Why should companies care about artificial intelligence and machine learning?

Zorina Alliata: AI and ML can help companies to make better decisions, increase efficiency, and reduce costs. With AI and ML they can automate repetitive processes and improve the customer experience significantly.

A few years ago, when I had a fender bender with my car, I had to communicate with my insurance company through phone calls and take time off work to take my car to specific repair shops. Just last year, when my teenage son bumped his car in the parking lot, he used his mobile app to communicate with the insurance company right away, upload images of the car damage, get a rental car, and arrange for his car to be dropped off for repairs by a technician. He could see the status of the repairs online, he received automatic reports, and his car was delivered home when fixed. Behind his pleasant experience, there was a lot of AI and ML: image recognition, chatbots, sentiment analysis.

Another thing companies can benefit from is mining insights from data. For example, looking at all your sales data, the algorithms might find patterns that were not previously known. A common use for this is in segmenting and clustering populations in order to better define a focused message. If you can cluster all people with a high propensity to buy a certain type of insurance policy, then your marketing campaigns can be much more effective.
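The segmentation idea above can be sketched with a minimal clustering routine (the customer data and the simple one-dimensional k-means below are invented for illustration, not from the interview): the algorithm groups customers by behavior without being told what the groups are.

```python
# Minimal 1-D k-means: split customers into k groups by a single feature.

def kmeans_1d(values, k=2, iters=20):
    """Return cluster centers and member lists for 1-D k-means."""
    # Spread the initial centers across the sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical yearly insurance spend per customer (dollars).
spend = [120, 150, 130, 900, 950, 1100, 140, 1000]
centers, segments = kmeans_1d(spend)
print(centers)  # [135.0, 987.5]: low spenders vs. high spenders
```

A marketing team could then target the high-spend segment with one campaign and the low-spend segment with another, which is the "focused message" use case Alliata describes.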

InfoQ: What should senior software managers know about artificial intelligence and machine learning?

Alliata: Let me give you an example. We sometimes do what we call unsupervised learning - that is, we analyse huge quantities of data just to see what patterns we can find. There is no clear variable to optimize, there is no defined end result.

Many years ago, I read about this airline that used unsupervised learning on their data and the machine came back with the following insight: it found that people who were born on a Tuesday were more likely to order vegetarian meals on a flight. This was not a question anyone had posed, or an insight anyone was ready for.

As a software development manager, how do you plan for whatever weird or amazing insight the algorithms will deliver? We just might not even know what we are looking for until later in the project. This is very different from regular software development where we have a very clear outcome stated from the beginning, for example: display all flyers and their meals on a webpage.

InfoQ: What can companies do to prepare themselves for AI adoption?

Alliata: Education comes first. As a leader, you should understand what the benefits of using AI and ML are for your company, and understand a bit about how the technology works. Also, it is your task to communicate and openly discuss how AI will change the work and how it will affect the people in their current jobs.

Having a solid strategy and a solid set of business use cases that will provide real value is a great way to get started, and to use as your message and vision.

Promoting lean budgeting and agile teams will help quickly show value before large investments in AI resources and technology are made.

Establishing a culture of continuous improvement and continuous learning is also necessary. The technology is changing constantly and the development teams need time to keep up with the newest research and innovation.

Daniel Schmachtenberger: Artificial Intelligence and The … – Resilience

(Conversation recorded on May 04th, 2023)

Show Summary

On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks, but also accelerate the entire dynamic of the metacrisis? What is the role of intelligence vs. wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?

About Daniel Schmachtenberger

Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.

The throughline of his interests has to do with ways of improving the health and development of individuals and society, with a virtuous relationship between the two as a goal.

Towards these ends, he's had particular interest in the topics of catastrophic and existential risk, civilization and institutional decay and collapse as well as progress, collective action problems, social organization theories, and the relevant domains in philosophy and science.

Watch on YouTube

Show Notes & Links to Learn More:

PDF Transcript

00:00 Daniel Schmachtenberger info + TGS episodes part 1 and part 2 and part 3 and part 4 + part 5

Overview of Nate's story: Animated videos, Economics for the Future Beyond the Superorganism

Daniel's recommendations on further AI learning: Eliezer Yudkowsky on Bankless, David Bohm & Krishnamurti Conversations, Iain McGilchrist The Master and His Emissary, Robert Miles Videos on AI

00:03 ChatGPT, AI art and programming, Deep Fakes

04:25 Humans are a social species

05:17 Money is a claim on energy

05:25 Fossil energy is incredibly powerful but finite

05:39 Other non-renewable inputs to the economy

06:44 Money is primarily created through commercial banks but increasingly through central banks

06:52 Interest is not created when money is created

07:50 How AI obscures the truth and hurts social discourse

08:54 How AI affects jobs

09:30 Humans' unique problem-solving intelligence

11:22 100 million users in 6 weeks for ChatGPT, faster adoption than any technology ever

12:31 Cognitive bias + Nate's work on cognitive bias

16:01 Indigenous genocide and culture destruction, extinct and endangered species

23:06 Unabomber critique of the advancement of technology

23:21 Indigenous perspectives that resist the adoption of certain tech

24:34 Genghis Khan, Alexander the Great

26:35 Adoption of the plow and loss of animism

28:56 Antonio Turiel + TGS Podcast

31:40 Humans long history of environmental destruction

32:45 We are hitting planetary boundaries everywhere

34:20 Facebook's advertising algorithms' adverse societal effects

36:25 Golden Retrievers' co-evolution with humans

39:32 Jevons Paradox

40:05 Since the 1990s we've increased efficiency 36%, but energy use has increased 63%

41:02 Orders of effects

45:50 Maximum Power Principle

47:32 There are lots of different types of intelligence

48:20 Other hominids

53:09 Human ability to have abstractions of time and space

54:38 Laozi Tao Te Ching

57:14 Studies showing people dying of obesity are dying of nutrient deficiency

1:02:15 Co-selecting factors of evolution homeodynamics

1:04:30 Tyson Yunkaporta

1:05:00 Samantha Sweetwater

1:07:23 The Sabbath

1:13:04 Chesterton's Fence

1:13:50 Dialectics

1:21:08 E.O. Wilson & David Sloan Wilson Multilevel Selection

1:24:25 Recursive Innovation

1:26:15 Dunbar's Number

1:30:09 Hobbesian State of Nature

1:32:10 Humans are not specifically adapted to any particular environment

1:32:25 Neoteny in humans.

1:39:37 Economic Comparative Advantage

1:40:30 Nate's 2023 Earth Day Talk

1:43:25 Origins and types of capitalism

1:46:02 Ilya Prigogine

1:46:22 Moloch

1:46:55 Adam Smith Invisible Hand

1:50:32 Eliezer Yudkowsky

1:50:35 Nick Bostrom

1:51:05 AI systems' prowess at chess and other military strategy games

1:54:03 Swarming algorithms and AI regulation of flight patterns

1:56:40 Humans' lack of intuition for exponential curves

2:04:04 WarGames

2:04:06 Mutually Assured Destruction

2:05:40 Open AI

2:06:12 Anthropic

2:06:52 Motivated Reasoning

2:08:38 Technology is Not Values Neutral paper

2:15:45 Unknown unknowns Donald Rumsfeld

2:19:50 Shareholder Value

1:33:25 Energy and resource needs of AI

2:39:15 Eliezer Yudkowsky on Bankless

2:39:40 Machine Intelligence Research Institute

2:49:07 Risk of Generalized Artificial Intelligence

2:59:55 David Bohm & Krishnamurti Conversations

3:02:59 Iain McGilchrist The Master and His Emissary

3:10:01 Robert Miles Videos on AI

Teaser photo credit: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6533149
