Media Search:



Paper Claims AI May Be a Civilization-Destroying "Great Filter" – Futurism

If aliens are out there, why haven't they contacted us yet? It may be, a new paper argues, that they, or in the future we, inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.

This potential answer to the Fermi paradox, in which physicist Enrico Fermi and subsequent generations pose the question "where is everybody?", comes from National Intelligence University researcher Mark M. Bailey, who in a new, yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.

Bailey cites superhuman AI as a potential "Great Filter": an answer to the Fermi paradox in which some terrible and unknown threat, artificial or natural, wipes out intelligent life before it can make contact with others.

"For anyone concerned with global catastrophic risk, one sobering question remains," Bailey writes. "Is the Great Filter in our past, or is it a challenge that we must still overcome?"

We humans, the researcher notes, are "terrible at intuitively estimating long-term risk," and given how many warnings have already been issued about AI and its potential endpoint, an artificial general intelligence or AGI, it's possible, he argues, that we may be summoning our own demise.

"One way to examine the AI problem is through the lens of the second species argument," the paper continues. "This idea considers the possibility that advanced AI will effectively behave as a second intelligent species with whom we will inevitably share this planet. Considering how things went the last time this happened when modern humans and Neanderthals coexisted the potential outcomes are grim."

Even scarier, Bailey notes, is the prospect of near-god-like artificial superintelligence (ASI), in which an AGI surpasses human intelligence, because "any AI that can improve its own code would likely be motivated to do so."

"In this scenario, humans would relinquish their position as the dominant intelligent species on the planet with potential calamitous consequences," the author hypothesizes. "Like the Neanderthals, our control over our future, and even our very existence, may end with the introduction of a more intelligent competitor."

There hasn't yet, of course, been any direct evidence to suggest that extraterrestrial AIs wiped out natural life in any alien civilizations, though in Bailey's view, "the discovery of artificial extraterrestrial intelligence without concurrent evidence of a pre-existing biological intelligence would certainly move the needle."

That, of course, raises the possibility that destructive AIs are lingering around the universe after eliminating their creators. To that end, Bailey helpfully suggests that "actively signaling our existence in a way detectable to such an extraterrestrial AI may not be in our best interest," because "any competitive extraterrestrial AI may be inclined to seek resources elsewhere, including Earth."

"While it may seem like science fiction, it is probable that an out-of-control... technology like AI would be a likely candidate for the Great Filter whether organic to our planet, or of extraterrestrial origin," Bailey concludes. "We must ask ourselves; how do we prepare for this possibility?"

Reader, it's freaky stuff, but once again, we're glad someone is considering it.

More on an AI apocalypse: Warren Buffett Compares AI to the Atom Bomb

Go here to read the rest:

Paper Claims AI May Be a Civilization-Destroying "Great Filter" - Futurism

Why the Global Push for Decentralization? – Tekedia

Decentralization is a system in which lower-level components operate on local information to accomplish global goals, without a central authority or controller. In contrast, a centralized system is one in which a central entity exercises control over the lower-level components, either directly or through a power hierarchy.

Decentralized systems have many advantages over centralized systems, such as failure tolerance, redundancy, scalability, and autonomy. For example, the Internet is a decentralized system that allows users to communicate and share information across the world, without relying on a single server or authority. However, decentralized systems also have some challenges, such as management complexity, security risks, and coordination difficulties.
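To make the failure-tolerance point concrete, here is a minimal sketch in Python of a quorum read, one common decentralization pattern: a client queries several independent replicas and accepts whatever value a majority agrees on, so the lookup survives individual node failures. The `Replica` and `quorum_read` names are hypothetical, invented purely for illustration.

```python
import random

# Toy illustration of failure tolerance through redundancy: a client reads a
# value from several independent replicas and accepts the answer agreed on by
# a majority (a quorum), so the system tolerates individual node failures.
# All names here are hypothetical, for illustration only.

class Replica:
    def __init__(self, value, failure_rate=0.3):
        self.value = value
        self.failure_rate = failure_rate

    def read(self):
        if random.random() < self.failure_rate:
            raise ConnectionError("replica unavailable")
        return self.value

def quorum_read(replicas):
    """Return the value reported by a majority of reachable replicas."""
    responses = []
    for replica in replicas:
        try:
            responses.append(replica.read())
        except ConnectionError:
            continue  # tolerate individual failures
    quorum = len(replicas) // 2 + 1
    for value in set(responses):
        if responses.count(value) >= quorum:
            return value
    raise RuntimeError("no quorum: too many replicas failed")

replicas = [Replica("balance=100") for _ in range(5)]
print(quorum_read(replicas))  # usually succeeds even if some replicas fail
```

A centralized lookup against a single server fails whenever that one machine does; here, any three of the five replicas suffice, which is the redundancy advantage in miniature.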

In recent years, there has been a growing interest and demand for decentralized systems in various domains, such as computing, information technology, economics, and governance. One of the main drivers of this trend is the emergence of blockchain technologies, such as those used in cryptocurrencies like Bitcoin and Ethereum.



Blockchain technologies use cryptography and consensus algorithms to create a distributed ledger of transactions that is verifiable and immutable, without the need for a centralized intermediary or authority. This enables new possibilities for peer-to-peer transactions, smart contracts, digital assets, and decentralized applications.
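As a rough illustration of the immutability claim, here is a toy sketch of the hash-chaining idea behind such ledgers: each block commits to the hash of the previous block, so altering any past transaction breaks every later link. Real blockchains add consensus (proof-of-work or proof-of-stake), signatures, and peer-to-peer replication on top; none of that is modeled in this hypothetical example.

```python
import hashlib
import json

# Minimal sketch of the hash-chaining that makes a ledger tamper-evident.
# Consensus, signatures, and networking are deliberately omitted.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # an earlier block was altered
    return True

chain = []
append_block(chain, ["alice -> bob: 5"])
append_block(chain, ["bob -> carol: 2"])
print(verify(chain))                              # True
chain[0]["transactions"] = ["alice -> bob: 500"]  # tamper with history
print(verify(chain))                              # False: tampering detected
```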

Another reason for the global push for decentralized systems is the increasing awareness of, and concern about, the drawbacks and dangers of centralization. Centralization can lead to inefficiency, corruption, censorship, surveillance, and abuse of power by the central entity or authority.

For instance, many people are dissatisfied with the centralization of social media platforms, which can manipulate user data, influence public opinion, and censor content that they deem inappropriate or harmful. Decentralized systems can offer more privacy, freedom, and control to the users, by allowing them to choose their own rules and preferences.

Decentralization also has challenges and limitations, including:

Management complexity and coordination: Decentralized systems require more effort and resources to manage and coordinate the components, especially when they are large and diverse. This also increases the risk of conflicts and inconsistencies among the components.

Quality assurance and accountability: Decentralized systems may lack standards and regulations to ensure the quality and reliability of the components. This also makes it harder to monitor and evaluate the performance and behavior of the components, as well as to enforce rules and sanctions.

Scalability and efficiency: Decentralized systems may face difficulties in scaling up or down to meet changing demands and conditions. This also affects the speed and cost of the system, as well as its environmental impact.

Therefore, decentralized systems are not a panacea or a one-size-fits-all solution. They need to be carefully designed and implemented according to the specific context and objectives of each domain and application. They also need to be balanced with centralized systems when appropriate, to optimize their strengths and mitigate their weaknesses.

In conclusion, decentralized systems are systems that operate on local information to achieve global goals, without a central authority or controller. They have many benefits over centralized systems, such as failure tolerance, redundancy, scalability, and autonomy. They also address some of the problems and challenges of centralization, such as inefficiency, corruption, censorship, surveillance, and abuse of power.


Read more:

Why the Global Push for Decentralization? - Tekedia

People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what he'd recently been told by a machine-learning researcher: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it's morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge, and despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face, and that the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."

Follow this link:

People warned AI is becoming like a God and a 'catastrophe' is ... - UNILAD

Lido Community Weighing On-Chain Vote to Deploy Version 2 on Ethereum – Yahoo Finance

Lido, the dominant liquid staking platform, is voting to execute its second iteration on the Ethereum blockchain, a pivotal moment for users in the decentralized finance (DeFi) community who want further decentralization and better on- and off-ramps into Ethereum's staking ecosystem.

Lido's Twitter account is calling v2 the most important upgrade to date since its launch in December 2020, as Ethereum is Lido's first and largest market for liquid staking tokens.

V2 has two main focal points: ETH staking withdrawals and the introduction of a Staking Router, said to increase participation from a more diverse set of node operators. The upgrade comes as Lido commands the lead as the largest liquid staking platform in the DeFi space, with $11.77 billion in total value locked across the Ethereum ecosystem, per DefiLlama.

According to a blog post, "The implementation of withdrawals coupled with the Staking Router proposal will contribute to an increase in the decentralization of the network, a more healthy Lido protocol, and enable the long-awaited ability to stake and unstake (withdraw) at will, reinforcing stETH as the most composable and useful asset on Ethereum."
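To illustrate what staking and unstaking at will means mechanically, below is a toy model of liquid-staking share accounting, loosely in the spirit of stETH's rebasing balances. It is an illustrative sketch under simplifying assumptions, not Lido's actual contract logic; the `LiquidStakingPool` class is invented for this example.

```python
# Toy model of liquid-staking accounting, loosely in the spirit of stETH:
# a user's token balance is their share of the total pooled ETH, so balances
# grow as staking rewards accrue. Illustrative only, not Lido's contracts.

class LiquidStakingPool:
    def __init__(self):
        self.total_pooled_eth = 0.0
        self.total_shares = 0.0
        self.shares = {}  # account -> shares

    def stake(self, account, eth):
        if self.total_shares == 0:
            new_shares = eth
        else:
            new_shares = eth * self.total_shares / self.total_pooled_eth
        self.shares[account] = self.shares.get(account, 0.0) + new_shares
        self.total_shares += new_shares
        self.total_pooled_eth += eth

    def accrue_rewards(self, eth):
        # Rewards raise everyone's balance without minting shares (a "rebase").
        self.total_pooled_eth += eth

    def balance_of(self, account):
        if self.total_shares == 0:
            return 0.0
        return self.shares[account] * self.total_pooled_eth / self.total_shares

    def unstake(self, account, eth):
        # No balance checks here; a real contract would enforce them.
        burned = eth * self.total_shares / self.total_pooled_eth
        self.shares[account] -= burned
        self.total_shares -= burned
        self.total_pooled_eth -= eth
        return eth  # ETH returned to the user

pool = LiquidStakingPool()
pool.stake("alice", 32.0)
pool.accrue_rewards(1.6)                   # 5% staking rewards
print(pool.balance_of("alice"))            # 33.6
pool.unstake("alice", 10.0)                # withdraw at will, as v2 enables
print(round(pool.balance_of("alice"), 1))  # 23.6
```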

The vote ends on May 15. If it passes, Lidos smart contracts will upgrade and v2 will go live.

At press time, all participating LDO token holders had voted to deploy the upgrade. LDO, the governance token for Lido, has jumped 16% in the past 24 hours to $1.89, per CoinGecko.
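Readers who want to check figures like these themselves can query the public DefiLlama and CoinGecko HTTP APIs; a minimal sketch follows. The exact endpoint paths and the "lido-dao" CoinGecko coin id are assumptions based on those services' public documentation at the time of writing and may change.

```python
import json
import urllib.request

# Sketch of fetching the figures cited above from public APIs. Endpoint paths
# and the "lido-dao" id are assumptions and may change over time.

def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Lido's total value locked, per DefiLlama (returns a plain number).
tvl = get_json("https://api.llama.fi/tvl/lido")
print(f"Lido TVL: ${tvl / 1e9:.2f}B")

# LDO spot price and 24h change, per CoinGecko.
price = get_json(
    "https://api.coingecko.com/api/v3/simple/price"
    "?ids=lido-dao&vs_currencies=usd&include_24hr_change=true"
)
ldo = price["lido-dao"]
print(f"LDO: ${ldo['usd']:.2f} ({ldo['usd_24h_change']:+.1f}% over 24h)")
```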

Excerpt from:

Lido Community Weighing On-Chain Vote to Deploy Version 2 on Ethereum - Yahoo Finance

LSE leads the way with new AI Management course – The London School of Economics and Political Science

Please find a Q&A with Dr Aaron Cheng about the new course below:

Can you tell me about the course and the content?

The course title is Managing Artificial Intelligence. As you can tell, it's a human-centric approach to AI. I proposed this course because we have seen many courses at our School and others worldwide focusing on the technical capability of big data and AI. They help students see the potential of this technology rather than give a hands-on managerial perspective and guidelines for how we manage AI.

For the course, we have 10 lectures covering both the technicality and management of AI, as well as the social and ethical considerations, balanced to give students different perspectives on AI.

The course is supplemented with nine seminars so students can be exposed to, and engage in, the real-world managerial practices of AI. Among them, we have three case study sessions to cover product development, human-in-the-loop, business model, and global strategy of AI applications in various contexts, such as social media, healthcare, and telecommunication. So it's a fascinating line-up of teaching cases to show that AI is real and managing AI is now the priority of many organisations, not something we are envisioning and predicting for the future.

We also have an interesting debate on generative AI, the newest form of AI that can automatically generate content for people to use. We have seen lots of applications around it (e.g., ChatGPT) nowadays. In one of the seminars, students were assigned to five roles (employer, university, teachers, students, and the AI vendor) and debated the role of this technology in higher education. We wanted to see what kind of issues emerged in this ecosystem, and we did have interesting conversations when students walked in the shoes of different roles. This debate also yielded some regulatory implications for how AI should be managed in the higher education context.

The most exciting task for students is the team project on AI management. Student teams develop, present, and progress their projects across four seminars by incorporating what they learned in the lectures into their AI projects. Most of the teams start with a pressing business or societal challenge and then develop their start-ups around an AI solution.

Some of the students looked at whether journalism or public relations work can be fully automated, and in the end they decided it cannot. One of the teams looked at how predictive analytics can be used to help university students and teachers book spaces and make appointments. As you can tell, all of these projects are innovative and could be brought to market for real, so the students are very excited about that.

Overall, we find that students love the course. Their learning went along with the rapid changes in the field of AI, especially in the past several months since the launch of ChatGPT, as many technology companies raced against each other to push innovations forward on a daily basis. The field is fascinating, although it creates course design challenges for us to keep up.

Is the course designed for students working for companies coming up with AI projects?

It can be for students who wish to work in any sector that is now embracing this technology. It's important to note that although we need IT developers and data scientists to create AI and data-driven solutions, we need even more skilled professionals who know both technology and management to diffuse such innovations.

These professionals are often called business analysts, and they are managers at different levels in an organization who can lead digital transformation, often playing the role of middlemen connecting the supply and demand of AI and analytics solutions. Statistics from the McKinsey Global Institute showed a shortage of managers and analysts who can use their know-how of big data and AI for effective decision-making that is ten times larger than the shortage of data scientists or machine learning (ML) engineers, who mainly specialise in programming.

To meet the demand for managerial talents in AI, my course does not focus on teaching students how to design technology but more on how to manage it and lead digital transformation with AI.

It's also important to mention the programme that hosts this course, Management Information Systems and Digital Innovation (MISDI), a flagship master's programme for the Information Systems and Innovation Group (ISIG) in the Department of Management (DoM). The faculty expertise in ISIG and the course offerings in MISDI centre on connecting technology know-what with business and management know-how, to give students an edge through this connection.

This is also a student-demand-driven course. Over the past several years, students in MISDI and other programmes in DoM have developed a strong interest in AI issues, and many have used topics in AI management for their coursework and dissertations. However, we did not have a specialised course for it.

In other departments at LSE, like Statistics, there are very good AI and ML courses, but most of them are taught from the perspectives of statisticians or computer scientists. Since 2021, we have had an LSE100 course on how to control AI, which is very well designed from a social science perspective but only for undergraduate students.

To better meet the needs of master's students studying AI management, we have launched this new course in MISDI to integrate multiple perspectives on AI, focus on the managerial considerations, and give a comprehensive and critical treatment of the automation and augmentation roles of AI for individuals, organizations, and society at large.

Is the course designed for people interested in the business side of AI?

I would say so, but I want to stress that it's a more balanced course that also attracts students whose interests may go beyond business. Another important thing to mention is that the course is situated in a polarised public discourse with diverse views toward AI.

We have seen two camps. One is held by those who worry about AI and the social and ethical implications of replacing humans in the workplace. The other is a utopian view of AI, held by those who only advocate the technical capability of AI to extend human capabilities. The latter obviously has a more positive view of AI but sometimes downplays the existential threats to humans themselves, especially when AI intensifies inequality among people who do not have the knowledge or skills to manage it.

These two camps are very big now but heavily segregated. I feel that they do not talk to each other in a very productive way, as they often debate using distinct language systems. I believe effective communication between these camps is much needed in contemporary society, and people should know the underlying logic and assumptions of the two camps before they develop beliefs and take actions about AI. This is especially true for current and future leaders in the private and public sectors. They really need to gain a deep understanding of the potential, promise, and perils of AI, and to have a sober view of the AI hopes and hypes claimed by the two camps.

I hope this course can plant seeds deep in the hearts of these students, so that when they develop professional careers as business leaders and social planners, they know what AI is and, more importantly, take the responsibility to manage AI for a better future for humanity. At the end of the day, we should be able to create strong AI but also retain our own humanity and achieve shared prosperity with AI. This is the overarching idea of the course.

What makes this course unique and different?

Let me talk about similar courses and the difference my course makes in AI management education.

I have attended the biggest IT Teaching Workshop in my field (Information Systems) almost every year for the last five years. In the Workshop, teachers from most universities in the United States and Europe present their courses about big data and data analytics, yet I have not seen many specialist courses on AI.

Of course, in the Computer Science community, there are many popular courses about machine learning and data science, but they rarely say that these are AI courses. It is important to note that the concept of AI is not just technical but socio-technical. We need to study and teach the nature and implications of AI by examining its technical properties and also its social contexts. As far as I know, few courses have struck such a balance.

One reason most courses focus on the technicality of AI is obvious: STEM jobs are much better paid than many others. Preparing students for such jobs helps increase the popularity of universities, which further encourages the offering of technical AI or data science courses.

Leading the social science approach in higher education, LSE has its strength in cultivating leaders who can think and navigate social changes, especially the current transformational change led by AI. As such, we offer this new LSE course to situate the debate on AI in the academic and public discourse and approach AI education in a more comprehensive and critical way. We start with the history of AI, we discuss the role of data in making AI, and we unpack the black box of algorithms and issues involved (e.g., opacity, bias, interpretability).

Then we walk students through the socio-technical analysis of AI management at different levels. On the individual level, we assess the role of humans in the loop, and when and how human judgment needs to be exercised in designing and using AI. On the organizational level, we analyse business models, operations, and innovation with AI, as well as its governance. On the societal level, we discuss the ethical concerns and regulatory efforts around managing AI for good. As you can tell, with this approach to AI, students start to think about and raise their own critical questions about AI management in the digital economy.

What made you personally think this course was really needed?

I would like to start with my educational background and then my reading and thinking about AI in the past decade to answer this question.

Starting from my college education 15 years ago, I have been in the same discipline, management information systems; initially, my training was technical and particularly computer-science-oriented. Then my understanding of technology deepened after I moved to my master's programme and was exposed to a more behavioural perspective on how people interact with technology. Later, my PhD training in the economic analysis of information technology helped me engage in studying the bigger role of technology in businesses and society.

Now I am a researcher and teacher of information systems and innovation, and LSE has really broadened my horizons with its social science approach to technology. Throughout my educational journey, AI has been with me for many years, albeit more often in the form of algorithms or machine learning techniques.

AI did not catch much of my attention, nor, I am sure, anyone else's, until the recent boom in the AI field, especially when deep learning and generative models were developed and used to create powerful applications such as deepfakes or ChatGPT. People say nowadays that the era of artificial general intelligence is coming, in contrast to the past decades of artificial narrow intelligence, where AI could only serve a small set of pre-specified purposes and automate tasks like ordinary software does.

Over time I have realised that AI has so much potential to change human life in positive ways. At the same time, worry about the apocalyptic claim that machines are the end of humanity has reached an all-time high. I think it's time for us to seriously think about and study how to manage AI.

Teaching AI management is an opportunity for me as a researcher to explore with students the socio-technical nature and implications of AI and how we can be more responsible in designing and deploying AI. I am happy that my students have been excited about this course and really engaged in and benefitted from this journey.

Original post:

LSE leads the way with new AI Management course - The London School of Economics and Political Science