Media Search:



The Cautionary Tale of J. Robert Oppenheimer – Alta Magazine

When Christopher Nolan's blockbuster biopic of the theoretical physicist J. Robert Oppenheimer, the so-called father of the atomic bomb, drops in theaters on July 21, moviegoers might be forgiven for wondering, Why now? What relevance could a three-hour drama chronicling the travails and inner torment of the scientist who led the Manhattan Project (the race to develop the first nuclear weapon before the Germans during World War II) possibly have for today's 5G generation, which greets each new technological advance with wide-eyed excitement and optimism?

But the film, which focuses on the moral dilemma facing Oppenheimer and his young collaborators as they prepare to unleash the deadliest device ever created by mankind, aware that the world will never be the same in the wake of their invention, eerily mirrors the present moment, as many of us anxiously watch the artificial intelligence doomsday clock count down. Surely as terrifying as anything in Nolan's war epic is the New York Times' recent account of OpenAI CEO Sam Altman, sipping sweet wine as he calmly contemplates a radically altered future; boasting that he sees the U.S. effort to build the bomb as a project on the scale of his GPT-4, the awesomely powerful AI system that approaches human-level performance; and adding that it was "the level of ambition we aspire to."

This article appears in Issue 24 of Alta Journal.

If Altman, whose company created the chatbot ChatGPT, is troubled by any ethical qualms about his unprecedented artificial intelligence models and their potential impact on our lives and society, he is not losing any sleep over it. He sees too much promise in machine learning to be overly worried about the pitfalls. Large language models, the type of neural network on which ChatGPT is built, enable everything from digital assistants like Siri and Alexa to self-driving cars and computer-generated tweets and term papers. The 37-year-old AI guru thinks it's all good: transformative change. He is busy creating tools that empower humanity and cannot worry about all their applications and outcomes and whether there might be what he calls a "downside."

Just this March, in an interview for the podcast On with Kara Swisher, Altman seemed to channel his hero Oppenheimer, asserting that OpenAI had to move forward to exploit this revolutionary technology and that it requires, "in our belief, this continual deployment in the world." As with the discovery of nuclear fission, AI has too much momentum and cannot be stopped. The net gain outweighs the dangers. In other words, the market wants what the market wants. Microsoft is gung ho on the AI boom and has invested $13 billion in Altman's technology of the future, which means tools like robot soldiers and facial recognition-based surveillance systems might be rolled out at record speed.

We have seen such arrogance before, when Oppenheimer quoted from the Hindu scripture the Bhagavad Gita in the shadow of the monstrous mushroom cloud created by the Trinity test explosion in the Jornada del Muerto desert in New Mexico on July 16, 1945: "Now I am become Death, the destroyer of worlds." No man in history had ever been charged with developing so powerful a scientific weapon, an apparent affront to morality and sanity that posed a grave threat to civilization, yet the project proceeded with all due speed on the basis that it was virtually unavoidable. The official line was that it was a military necessity: the United States could not allow the enemy to achieve such a decisive weapon first. The bottom line is that the weapon was devised to be used, it cost upwards of $2 billion, and President Harry Truman and his top advisers had an assortment of strategic reasons (hello, Soviet Union) for deploying it.

Back in the spring of 1945, a prominent group of scientists on the Manhattan Project had voiced their concerns about the postwar implications of atomic energy and the grave social and political problems that might result. Among the most outspoken were the Danish Nobel laureate Niels Bohr, the Hungarian émigré physicist Leo Szilard, and the German émigré chemist and Nobel winner James Franck. Their mounting fears culminated in the Franck Report, a document drafted by a group from the project's Chicago laboratory arguing that releasing this indiscriminate destruction upon mankind would be a mistake, sacrificing public support around the world and precipitating a catastrophic arms race.

The Manhattan Project scientists also urged policymakers to carefully consider the questions of what the United States should do if Germany was defeated before the bomb was ready, which seemed likely; whether it should be used against Japan; and, if so, under what circumstances. "The way in which nuclear weapons are first revealed to the world," they noted, "appears to be of great, perhaps fateful importance." They proposed performing a technical demonstration and then giving Japan an ultimatum. The writers of the Franck Report wanted to explore what kind of international control of atomic energy and weapons would be feasible and desirable and how a strict inspection policy could be implemented. The shock waves of the Trinity explosion would be felt all over the world, especially in the Soviet Union. The scientists foresaw that the nuclear bomb could not remain a secret weapon at the exclusive disposal of the United States and that it inexorably followed that rogue nations and dictators would use the bomb to achieve their own territorial ambitions, even at the risk of triggering Armageddon.

Fast-forward to the spring of 2023, when more than 1,000 tech experts and leaders, such as Tesla chief Elon Musk, Apple cofounder Steve Wozniak, and entrepreneur and 2020 presidential candidate Andrew Yang, sounded the alarm on the unbridled development of AI technology in a signed letter warning that AI systems present "profound risks to society and humanity." AI developers, they continued, are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

The open letter called for a temporary halt to all AI research at labs around the globe until the risks can be better assessed and policymakers can create the appropriate guardrails. There needs to be an immediate pause "for at least 6 months," it stated, on the training of AI systems more powerful than GPT-4, which has led to the rapid development and release of imperfect tools that make mistakes, fabricate information unexpectedly (a phenomenon AI researchers have aptly dubbed "hallucination"), and can be used to spread disinformation and further the grotesque distortion of the internet. This pause, the signatories wrote, should be used to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts, and they urged policymakers to roll out robust AI governance systems. How the letter's authors hope to enforce compliance and prevent these tools from falling into the hands of authoritarian governments remains unclear.

Geoffrey Hinton, a pioneering computer scientist who has been called the "godfather of AI," did not sign the letter but in May announced that he was leaving Google in order to freely express his concerns about the global AI race. He is worried that the reckless pace of advances in machine superintelligence could pose a serious threat to humanity. Until recently, Hinton thought that it was going to be two to five decades before we had general-purpose AI (with its wide range of possible uses, both intended and unintended), but the trailblazing work of Google and OpenAI means the ability of AI systems to learn and solve any task with something approaching human cognition looms directly ahead, and in some ways they are already eclipsing the capabilities of the human brain. "Look at how it was five years ago and how it is now," Hinton said of AI technology. "Take the difference and propagate it forwards. That's scary."

Until this year, when people asked Hinton how he could work on technology that was potentially dangerous, he would always paraphrase Oppenheimer to the effect that when you see something that is "technically sweet," you go ahead and do it. He is not sanguine enough about future iterations of AI to say that anymore.

Now, as during the Manhattan Project, there are those who argue against any moratorium on development for fear of the United States losing its competitive edge. Former Google CEO Eric Schmidt, who has expressed concerns about the possible misuse of AI, does not support a hiatus for the simple reason that it would benefit China. Schmidt is in favor of voluntary regulation, which he has described somewhat lackadaisically as letting the industry try to get its act together. Yet he concedes that the dangers inherent in AI itself may pose a larger threat than any global power struggle. "I think the concerns could be understated. Things could be worse than people are saying," he told the Australian Financial Review in April. "You have a scenario here where you have these large language models that, as they get bigger, have emergent behavior we don't understand."

If Nolan is true to form, audiences may find the personal dimension of Oppenheimer even more chilling than the IMAX-enhanced depiction of hair-raising explosions. The director has said that he is not interested in the mechanics of the bomb; rather, what fascinates him is the paradoxical and tragic nature of the man himself. Specifically, the movie will examine the toll inventing a weapon of mass destruction takes on an otherwise peaceable, dreamy, poetry-quoting blackboard theoretician, whose only previous brush with conflict was the occasional demonstration on UC Berkeley's leafy campus.

One of the things that would haunt Oppenheimer was his decision, as head of the scientific panel chosen to advise on the use of the bomb, to argue that there was no practical alternative to military use of the weapon. He wrote to Secretary of War Henry Stimson in June 1945 that he did not feel it was the panel's place to tell the government what to do with the invention: "It is clear that we, as scientific men, have no proprietary rights [and] no claim to special competence in solving the political, social, and military problems which are presented by the advent of atomic power."

Even at the time, Oppenheimer was already in the minority: most of the project scientists argued vehemently that they knew more about the bomb, and had given more thought to its potential dangers, than anyone else. But when Leo Szilard tried to circulate a petition rallying the scientists to present their views to the government, Oppenheimer forbade him to distribute it at Los Alamos.


After the two atomic attacks on Japan (first Hiroshima on August 6 and then, just three days later, Nagasaki on August 9), the horror of the mass killings, and of the unanticipated and deadly effects of radiation poisoning, forcefully hit Oppenheimer. In the days and weeks that followed, the brilliant scientific leader who had been drawn to the bomb project by ego and ambition, and who had skillfully helmed the secret military laboratory at Los Alamos in service of his country, was undone by the weight of responsibility for what he had wrought on the world. Within a month of the bombings, Oppenheimer regretted his stand on the role of scientists. He reversed his position and began frantically trying to use his influence and celebrity as the father of the A-bomb to convince the Truman administration of the urgent need for international control of nuclear power and weapons.

The film will almost certainly include the famous, or infamous, scene when Oppenheimer, by then a nervous wreck, burst into the Oval Office and dramatically announced, "Mr. President, I feel I have blood on my hands." Truman was furious. "I told him," the president said later, "the blood was on my hands, to let me worry about that." Afterward, Truman, who was struggling with his own misgivings about dropping the bombs and what it would mean for his legacy, would denounce Oppenheimer as that "cry-baby scientist."

In the grip of his postwar zealotry, Oppenheimer became an outspoken opponent of nuclear proliferation. He was convinced no good could come of the race for the hydrogen bomb. Just months after the Soviet Union's successful test of an atomic bomb in 1949, he joined other eminent scientists in lobbying against the development of the H-bomb. In an attempt to alert the world, he helped draft a report that went so far as to describe Edward Teller's "Super" bomb as a weapon of genocide, essentially a threat to the future of the human race, and urged the nation not to proceed with a crash effort to develop bigger, ever more destructive thermonuclear warheads. In an effort to silence him, Teller and his faction of bigger-is-better physicists, together with officials in the U.S. Air Force who were eyeing huge defense contracts, cast aspersions on Oppenheimer's character and patriotism and dug up old allegations about his ties to communism. In 1954, the Atomic Energy Commission, after a kangaroo-court hearing, found him to be a loyal citizen but stripped him of his security clearance.

Last December, almost 70 years later, the U.S. Department of Energy restored Oppenheimer's clearance, admitting that the trial had been flawed and that the verdict had less to do with genuine national security concerns than with his failure to support the country's hydrogen bomb program. The reprieve came too late for the physicist, whose reputation had been destroyed and whose public life as a scientist-statesman was over. He died in 1967, relatively young at 62, still an outcast.

Altman and today's other lofty tech leaders would do well to note the terrible swiftness of Oppenheimer's fall from grace (from hero to villain in less than a decade) and how quick the government was to dispense with his advice once it had taken possession of his invention. The internet still remains unregulated in this country, but the European Union is considering labeling ChatGPT "high risk." Italy has already banned OpenAI's service. Perhaps revealing a bit of nervousness that he has gotten ahead of himself, Altman responded to the open letter about temporarily halting the development of AI by taking to Twitter to gush about the demand that his company release a great alignment dataset, calling it the "one thing coming up in the debate about the pause letter I really agree with."

Nolan's Oppenheimer epic will inevitably be a cautionary tale. The story of the nuclear weapons project illustrates, in the starkest terms, what happens when new science is developed too quickly, without any moral calculus, and how it can lead to devastating consequences that could not have been imagined at the outset.

See more here:

The Cautionary Tale of J. Robert Oppenheimer - Alta Magazine

Virgin Voyages and JLo Bust on A.I. To Sell Vacations – We Got This Covered


Artificial intelligence is nothing to play with, even though apps are being handed out like toys for the world to enjoy. So it looks like Jennifer Lopez wanted to have some fun with the idea in her latest commercial for Virgin Voyages, and it's hilarious.

It's no secret that JLo can sell anything, from albums to movies and anything else she wants. What do AI and Virgin Voyages have to do with each other? It's the hope that Virgin Voyages isn't out there with AI captains steering the ship. The world just had a tragedy with the Titan, and it was manned. We don't need another episode like that.

No, this is another thing entirely. This is a commercial, and it has all the funny that AI can provide. Putting on those headsets that cover a person's eyes sends them to an entirely different environment, and it's fun to be transported to the jungles of Africa or the beaches of Morocco. Just remember that another person can come behind you and put that same headset on, and the virtual person takes on an entirely new personality.

"Birthday. Anniversary. Because you just want to live in the NOW! Let me personally invite your friends to celebrate at sea. Create a customized message using the @Virgin Voyages next Jen(eration) AI tool (link in bio)."

It's not just a yacht! It's a super yacht!

How many of you want that commercial to go on forever, with JLo doing all those personalities? I know I could watch it for days.

Everyone's a Virgin now. Just to make it clear, WGTC doesn't sell tickets to the show. We're not affiliated or anything. We just like the commercial.

Contributing Writer at WGTC, Michael Allen is the author of 'The Deeper Dark' and 'A River in the Ocean,' both available on Amazon. At this time, 'The Deeper Dark' is also available on Apple Books. Currently under contract to write a full-length feature spy drama for producer/director Anton Jokikunnas.

Read the original here:

Virgin Voyages and JLo Bust on A.I. To Sell Vacations - We Got This Covered

European Commission set to propose an overhaul of rules for gene … – Chemistry World

A leaked document has revealed that the European Commission is set to recommend a radical rethink of how the EU regulates some genetically engineered crops. This would mean light or no regulation for gene-edited crops with DNA changes that could have occurred in nature.

The commission had previously concluded in 2021 that current EU legislation for new genomic techniques (NGTs) is not fit for purpose. Such techniques could reduce the use of pesticides on crops, allow crops to be better adapted for warmer climates or generate plants more resistant to pests and diseases.

EU regulations currently demand that plants with changes introduced by Crispr gene editing go through an onerous and expensive approval process. This places them on a par with genetically modified organisms (GMOs), which can contain genes introduced from entirely different organisms, known as transgenes.

This followed a ruling by the European Court of Justice in July 2018 that gene-edited crops are subject to the same 2001 regulations as GMOs. The decision set the EU apart globally and was criticised by many plant scientists for hamstringing crop biotech. Many environmental organisations support the 2018 position, however.

In a leaked document, the commission recommends substantial changes in regulating plants obtained by targeted mutagenesis when the changes could be achieved through conventional breeding.

Such plants would be treated similarly to conventional plants and would not require authorisation, risk assessment, traceability and labelling as GMOs, according to the document. A transparency register would be set up for these plants.

The draft also recommends that some leeway be given to gene-edited plants that could contribute to more sustainable agriculture, with labels potentially introduced to inform consumers.

The draft document emphasised that NGTs do not introduce genetic material from a non-crossable species, which is what happens with GMOs, and referenced the conclusion of the European Food Safety Authority that there are no new hazards linked to targeted mutagenesis compared with conventional breeding.

Plant scientist Agnès Ricroch, at the University of Paris-Saclay and the French Academy of Agriculture in France, welcomed the proposal, pointing to the Russian invasion of Ukraine and its impact on food supply in Europe, as well as the need to adapt crops to new climate conditions. "We need to increase yields for wheat, corn, rapeseed, sunflower," she says. "NGTs can accelerate the process of breeding, though it will still take time."

She notes that climate change is bringing new pests and diseases into Europe and farmers will need new crop varieties. She adds that the proposals would encourage plant scientists to innovate and perhaps launch biotech start-up companies.

"This is a great step by the European Union," says Jon Entine, director of the Genetic Literacy Project, which published the leaked draft. "This document suggests that we're going to put the issue back in the hands of farmers and scientists." He adds that this doesn't mean that ideology and politics won't have a role in shaping regulations, but for the first time it will mean that "Europe will not be a scientific laggard on these issues."

Many NGOs have nonetheless expressed opposition to the proposals in the draft document. "The assumption the commission makes that new GMOs would lead to more sustainability is based on industry's claims, instead of real evidence," said Nina Holland, a researcher at Corporate Europe Observatory, in a media release. "Since NGT seeds will be patented, this will erode farmers' rights, and it will lead to a further monopolisation of the already highly concentrated seed market."

Plant scientist Sjef Smeekens, at Utrecht University in the Netherlands, warns that the EU will import gene-edited foodstuffs from elsewhere and no one will know, since countries such as the US, Japan and Canada allow them without registration. "If we in the EU opt out of this system, then it will have severe consequences for our breeding industry and academic research in plant science," he adds.

The proposal is expected to be published on 5 July. It must go before the European Parliament and the Council of Ministers, which represents each of the 27 EU member states. "This legal position, if accepted, must operate in all EU countries. If a country like France or Germany really objects, then this is dead," says Smeekens.

The UK introduced a new law earlier this year to permit some gene editing of crops or livestock.

View original post here:
European Commission set to propose an overhaul of rules for gene ... - Chemistry World

The European Union Is Getting Nervous About Atmosphere-Altering … – Gizmodo

The European Union is calling for international talks on a potential worldwide framework on how to treat and regulate deliberately atmosphere-altering tech, aka geoengineering.


In a statement released today, the European Commission argued that the risks and long-term impacts of geoengineering aren't well understood and that necessary regulations haven't been developed. "[The tech] could also increase power imbalances between nations, spark conflicts and raises a myriad of ethical, legal, governance and political issues," the commission wrote.

"Nobody should be conducting experiments alone with our shared planet," Frans Timmermans, the European Union climate policy chief, said, according to Reuters. "This should be discussed in the right forum, at the highest international level."

There's a reason why global leaders are worried about environment-altering projects going unchecked. Geoengineering describes new technologies and strategies intended to help lower global temperatures. They include carbon capture projects, but a lot of that tech is relatively new and isn't always effective (and is sometimes backed by big oil). Some geoengineering proposals are kind of alarming and feel like something out of a sci-fi horror film. This is especially true for solar radiation modification ideas: projects that seek to block the sun's rays from reaching the planet.

Earlier this year, a group of researchers at Harvard and the University of Utah proposed a solution that would shoot millions of tons of moon dust into Earth's orbit to partially block out the Sun's rays every year. An explosion on the moon that has catastrophic implications for the globe is actually the plot of a book series that begins with Life As We Knew It by Susan Beth Pfeffer. Spoiler: everything went to hell and people died.

Some suggestions have been pretty cool concepts, in theory. Last year, a team of researchers at the Massachusetts Institute of Technology announced an idea: engineering a huge raft of bubbles that would be sent into outer space. The raft would sit between the Earth and the Sun and would be big enough to deflect sunlight away from the planet to slow down global warming. But sending rockets into space also creates emissions that contribute to the overall problem.

"Geoengineering might be our final and only option. Yet, most geoengineering proposals are earth-bound, which poses tremendous risks to our living ecosystem," an MIT web page explained. Which, hey, it's cool in theory. But when would we have the technology (and the funding) to send a bubble raft into space?

And some forms of geoengineering have already been utilized in the U.S. Colorado's Weather Modification Program has used cloud seeding to boost snowfall. This is when silver iodide particles are released into clouds to promote the formation of ice particles, which then turn into falling snow. Earlier this year, the Southern Nevada Water Authority accepted a more than $2 million grant to support more cloud seeding in Western states, the Associated Press reported. But this form of geoengineering requires specific conditions: snow-producing clouds must already be present for it to work, and there needs to be enough humidity in the atmosphere.

None of this is to say that the creativity behind geoengineering concepts isn't cool. But why not simply hold fossil fuel companies accountable for achieving record profits at the expense of the rest of us? And the EU isn't the first government to push for stricter regulations. This January, Mexico's government moved to ban solar geoengineering projects in the country after the startup Make Sunsets claimed that it had released weather balloons filled with sulfur dioxide particles in the Mexican state of Baja California Sur. The sulfur dioxide particles are reflective and were supposed to block sunlight. However, these particles could contribute to acid rain and can irritate people's lungs, The Verge reported. A few balloons may not cause those problems, but the company conducted this experiment without any sort of approval from Mexican authorities.


Go here to see the original:
The European Union Is Getting Nervous About Atmosphere-Altering ... - Gizmodo

The European Union Election Observation Mission presented its … – EEAS

The European Union Election Observation Mission (EU EOM) to Nigeria has today published its final report on the federal and state elections of 25 February and 18 March. The Chief Observer, Barry Andrews, Member of the European Parliament, stated: "In the lead-up to the 2023 general elections, Nigerian citizens demonstrated a clear commitment to the democratic process. That said, the elections exposed enduring systemic weaknesses and therefore signal a need for further legal and operational reforms to enhance transparency, inclusiveness, and accountability."

Following a three-month-long observation across Nigeria, and in accordance with its usual practice, the EU EOM is now pleased to present its findings and recommendations. Shortcomings in law and electoral administration hindered the conduct of well-run and inclusive elections and damaged trust in the Independent National Electoral Commission (INEC). With the aim of contributing to the improvement of future elections, the EU EOM is offering 23 recommendations for consideration by the Nigerian authorities.

"We are particularly concerned about the need for reform in six areas, which we have identified as priority recommendations and which, we believe, if implemented, could contribute to improvements in the conduct of elections," said Barry Andrews.

The six priority recommendations point to the need to (1) remove ambiguities in the law, (2) establish a publicly accountable selection process for INEC members, (3) ensure real-time publication of and access to election results, (4) provide greater protection for media practitioners, (5) address discrimination against women in political life, and (6) end impunity regarding electoral offenses.

Chief Observer Barry Andrews noted: "Importantly, there is a need for political will to achieve improved democratic practices in Nigeria. Inclusive dialogue between all stakeholders on electoral reform remains crucial. The European Union stands ready to support Nigerian stakeholders in the implementation of these recommendations."

At the invitation of the Independent National Electoral Commission of Nigeria, the EU EOM carried out its work between 11 January and 11 April. A delegation of the European Parliament joined the EU EOM for the observation of the presidential and National Assembly elections. The mission accredited a total of 110 observers from 25 EU member states, as well as Norway, Switzerland, and Canada.

Read the original here:
The European Union Election Observation Mission presented its ... - EEAS