Archive for the ‘Artificial Super Intelligence’ Category

15 Super Cool Wallpapers for iPhone and Android – YMWC 18 – YTECHB

We are back with another monthly wallpaper collection! This month's collection features 15 super cool aesthetic wallpapers that you can use on your iPhone or Android smartphone. As with our previous YTECHB Monthly Wallpaper Collection (YMWC) posts, all wallpapers are available in high resolution.

For this month's collection, I created some wallpapers using artificial intelligence (AI) tools, picked some from social media platforms, included a few shared by our Telegram channel members, and the rest are tweaked versions of some gorgeous stock wallpapers. These wallpapers are designed to enhance the visual appeal of your devices and provide a unique and captivating background. Feel free to download and enjoy the collection on your devices!

The collection of 15 wallpapers features minimalist, abstract, dark, gradient, and nature backgrounds, along with a few aesthetic ones. You can follow us on Twitter (@YTECHB), Google News (YTECHB), or join us on Telegram (@YTECHB) for more updates.

Now let's take a look at the wallpapers in this month's collection.

Scenery has a mesmerizing appeal, and when combined with a neon effect it creates a stunning visual experience. Reddit user u/Just_indrit shared this stunning wallpaper; here's a look at the preview.

Download Link

The second background in the list is a minimalist wallpaper, shared on our Telegram channel. The wallpaper features minimal art of a standing deer and birds flying in the sky. Check out the preview of the wallpaper.

Download Link

For those who love minimalist wallpapers, here's a stunning lighthouse background by the beach. We created this wallpaper with artificial intelligence; you can check out some more here.

Download Link

If you like dark backgrounds, we have an AMOLED supercar wallpaper shared by u/wallpop_02. The background is 74 percent black, so you will love using it in dark mode. See the wallpaper preview here.

Download Link

The next background on the list is another minimal wallpaper featuring a striking piece of artwork. This one was also created using artificial intelligence; here's the preview of the wallpaper.

Download Link

Abstract wallpapers are amazing. Here's another abstract wallpaper that features a flower-and-leaves texture with a glass effect. It looks great, especially against the sky-blue background. Check out the preview of the wallpaper.

Download Link

The next wallpaper on the list is from the popular American epic franchise Star Wars. It is an AMOLED wallpaper that is 50 percent black, shared by u/LukeTheGeek in the AMOLED Backgrounds subreddit. See the preview of the wallpaper here.

Download Link

Another addition to the list is a beautiful minimal background featuring scenery. This artwork showcases boats on a lake with majestic mountains in the background. This is another wallpaper created using new technology.

Download Link

The next wallpaper on the list is an abstract background available on one of ZTE's smartphones. If you are looking for a simple abstract wallpaper, give this one a try.

Download Link

Vibrant, colorful backgrounds are awesome, and the next one is no exception. This one is a stock wallpaper from the Wiko Hi Enjoy 60, shared on our Telegram channel. See the preview of the wallpaper.

Download Link

Meizu 20 series phones come packed with some amazing colorful wallpapers. We picked four wallpapers from the series for this month's collection; see the previews here. Check out the complete collection of Meizu 20 wallpapers here.

Download Links: Red | Yellow | Light Gradient | Dark Gradient

The Earth wallpaper on the iPhone XS series is undeniably stunning, and we often come across similar backgrounds on the web. Today, we have another similar wallpaper shared by u/JerryGuptaa.

Download Link

The collection is not over yet; check out our previous monthly wallpaper collections. Also, which image from this list is your favorite? Let us know in the comments section.

In case you missed the previous YMWC collections: YMWC 1 | YMWC 2 | YMWC 3 | YMWC 4 | YMWC 5 | YMWC 6 | YMWC 7 | YMWC 8 | YMWC 9 | YMWC 10 | YMWC 11

Make sure to share this article with your friends across social media.


PUB CHAT: Changing lives congrats to all grads and those who … – Finger Lakes Times

I can't remember whether we were studying Greek myths in particular. Or creative writing. Probably a combination. I definitely don't think it was math or social studies or anything like that.

I do know that it was 50 years ago, and I was in seventh grade. The assignment was to write something in the style of a Greek myth. I'm sure our teacher, Mr. McKee, must have said that the subject matter didn't matter.

What these past five decades have not erased from my foggy brain is the fact that I wrote about football; specifically, an NFL game involving my favorite Minnesota Vikings beating up on my least favorite Dallas Cowboys. In these days of AI, after a few quick keystrokes, you could have something like that whipped out and at your disposal in milliseconds. In those youthful, pre-computer, prehistoric days, the only intelligence we had was not artificial; it was between our ears.

I wish I still had that paper, but I sure don't; who keeps papers from seventh grade? I'm guessing I wrote about a clash of titans where the Vikings completed their odyssey and smote (or is it smited?) their nemesis via a Herculean effort.

Or something like that.

Whatever words I used, Mr. McKee gave me an A for the endeavor, and hand-wrote this note on the top of the paper: "You should think about becoming a sports writer."

My SI (for Super Intelligence) response: "Huh?"

I mean, it was seventh grade. What are we, about 12 or 13 years old at that time? I hadn't thought about becoming anything at that point, other than maybe the top home run hitter in our neighborhood Wiffle ball league.

But a sports writer? Hmmmm. Actually, it didn't sound too shabby. I loved sports, all sports. And I loved reading and writing; I even made a few homemade newspapers (some of which I actually do still have, thanks to Mom preserving them).

So, probably that very night, after surely slugging three or four homers over the telephone wires in the street out in front of my friend's house, I started really thinking about it.

And that led me to concentrating more in writing and English classes in junior high and high school, which led me to joining the school newspaper in high school, which led me to working internships for my local weekly and then daily newspapers, which led me to majoring in journalism in college, which led me to becoming sports editor and then editor-in-chief of my college paper, which led me to first general news reporting, then sports writing and a professional career in journalism, which led me to the very seat that I occupy today as I write this Pub Chat, publisher of the Finger Lakes Times.

To quote Spencer Tulis' favorite band, the Grateful Dead, what a "long, strange trip" it's been, and all because Mr. McKee took a little extra time and, instead of just marking a grade on that paper, wrote that simple sentence all those 50 years ago.

He has since passed and is now teaching in heaven. I never got the chance to thank him for inspiring me, but my sister knew him later in life. I told her this story once and asked her to thank him for me if she ran into him, which she said she did.

What got me traveling down this road in today's Pub Chat is the fact that while Independence Day is bearing down on us, it also is commencement time around these parts. Managing Editor Alan Brignall is working diligently putting together our special graduation section, which will be published in next weekend's July 8 edition of the FLT. It's intended to be a keepsake for all members of the Class of 2023 from our area high schools and their families, and my hearty congrats go out to all those students who are ready to embark on the next exciting chapters of their lives, whatever those chapters may be.

But I also wonder how many Mr. McKees are out there: teachers, educators, parents, other adults who went maybe an extra mile, maybe just an extra yard or two, to spark an interest, to suggest a path, to light a fire in a young person that changed or influenced his or her life. High school graduation is a testament to them, too.


AI poses an existential threat, according to Munk Debates crowd … – The Hub

More than two-thirds of the Munk Debates crowd came into Roy Thomson Hall last week believing that artificial intelligence poses an existential threat to humanity, and the debate-goers left mostly unshaken, with only three percent of the audience changing their minds after the final arguments had been made.

Over the last year, discourse about AI has greatly intensified with the release of ChatGPT and other AI-driven, publicly available technologies. In the wake of these developments, high-profile AI experts debated the resolution: "Be it resolved, AI research and development poses an existential threat."

Arguing on the pro side of the resolution was Yoshua Bengio, a professor at the Université de Montréal and founder and scientific director of the Mila Quebec AI Institute, who won the 2018 A.M. Turing Award in the field of computing. Alongside him was Max Tegmark, a professor performing AI and physics research at MIT.

On the con side was Melanie Mitchell, a professor at the Santa Fe Institute who has authored and edited several books and papers on AI and related science and technologies. Also on the con side was Yann LeCun, VP & chief AI scientist at Meta and Silver Professor at NYU.

During the debate, Tegmark asked the con side if they had any evidence that AI will not pose an existential threat to humanity.

"What do you actually think the probability is that we are going to get superhuman intelligence, say, in 20 years, say, in 100 years?" asked Tegmark. "What is your plan for how to make it safe? What is your plan for how we're going to make sure that the goals of an AI are always aligned with humans?"

LeCun said that such scenarios cannot be fully disproven, but compared them to the claim that a teapot is flying around Saturn, which likewise cannot be disproven. He added that when jet planes were being developed in the 1930s, supersonic trans-Atlantic jets would have been regarded as impossible, and were only built decades later.

"I think a lot of the fears around AI are predicated on the idea that somehow there is a hard takeoff, which is that the minute you turn on an AI system that is capable of human intelligence or superintelligence, it's going to take over the world within minutes," said LeCun. "This is preposterous."

Bengio said companies that develop AI are likely to be more interested in profit-making and beating their competition than in aligning their products with the needs of society.

"What Max and I and others are saying is not, necessarily, there's going to be a catastrophe, but that we need to understand what can go wrong so that we can prepare for it," said Bengio.

Mitchell replied that the risk of anything is non-zero and that there is always the possibility that aliens may arrive and destroy Earth at any given moment, but that is highly unlikely. She pointed out that all of AI's intelligence is derived from human data, that AI lacks the capacity to understand the world, and that negative predictions about AI are not a new phenomenon.

"The whole history of AI has been a history of failed predictions. Back in the 1950s and '60s, people were predicting the same thing about super-intelligent AI and talking about existential risk, but it was wrong then. I'd say it's wrong now," said Mitchell.

Towards the end of the debate, Tegmark referenced the warnings made by Geoffrey Hinton, sometimes called the "godfather of AI," who has stated that AI has the potential to manipulate and replace humans with its faster, automated thinking.

"I feel a little bit like we're on this big ship sailing south from here down in the Niagara River, and Yoshua is like, 'I heard there might be a waterfall down there. Maybe this isn't safe,' and Melanie is saying, 'Well, I'm not convinced that there even is a waterfall,' even though Geoff Hinton says there is," said Tegmark.

Mitchell responded by reiterating that similar fears had been expressed decades ago without coming to fruition.

"That happened in 1960, not by Geoffrey Hinton, but people like Claude Shannon and Herbert Simon, and they were just dead wrong," said Mitchell.

At the start of the debate, 67 percent of the audience placed themselves on the pro side, while 33 percent were on the con side. When it was over, the con side won by convincing 3 percent of the audience to change their initial position. While the con side did win according to the debate rules, a 64 percent majority of the audience remained on the pro side.

From the outset, Tegmark argued that superhuman AI could surpass revolutionary technologies like nuclear bombs, possessing greater intelligence without human emotions or empathy. Tegmark also highlighted concerns about malicious use and the replacement of decision-making roles by AI.

LeCun countered by stating that current AI systems, like self-driving cars, have limited capabilities and lack reasoning and understanding of the world. He noted that some feared harms of AI, such as the spread of misinformation, already exist on social media and can be addressed through countermeasures that themselves use AI tools. LeCun proposed objective-driven AI with constraints and subservient emotions to ensure safety.

Bengio expressed concern about machines gaining self-preservation goals, which could lead them to want to control humans in order to survive.

On the other hand, Mitchell argued that fears about AI are rooted in human psychology and not supported by science or evidence. She believes that AI does not pose an existential threat in the near future, and that emphasizing such concerns diverts attention from real risks and hinders the potential benefits of technological progress.


The Cautionary Tale of J. Robert Oppenheimer – Alta Magazine

When Christopher Nolan's blockbuster biopic of the theoretical physicist J. Robert Oppenheimer, the so-called father of the atomic bomb, drops in theaters on July 21, moviegoers might be forgiven for wondering, "Why now?" What relevance could a three-hour drama chronicling the travails and inner torment of the scientist who led the Manhattan Project (the race to develop the first nuclear weapon before the Germans during World War II) possibly have for today's 5G generation, which greets each new technological advance with wide-eyed excitement and optimism?

But the film, which focuses on the moral dilemma facing Oppenheimer and his young collaborators as they prepare to unleash the deadliest device ever created by mankind, aware that the world will never be the same in the wake of their invention, eerily mirrors the present moment, as many of us anxiously watch the artificial intelligence doomsday clock countdown. Surely as terrifying as anything in Nolan's war epic is the New York Times' recent account of OpenAI CEO Sam Altman, sipping sweet wine as he calmly contemplates a radically altered future; boasting that he sees the U.S. effort to build the bomb as a project on the scale of his GPT-4, the awesomely powerful AI system that approaches human-level performance; and adding that it was "the level of ambition we aspire to."

This article appears in Issue 24 of Alta Journal.

If Altman, whose company created the chatbot ChatGPT, is troubled by any ethical qualms about his unprecedented artificial intelligence models and their potential impact on our lives and society, he is not losing any sleep over it. He sees too much promise in machine learning to be overly worried about the pitfalls. Large language models, the types of neural network on which ChatGPT is built, enable everything from digital assistants like Siri and Alexa to self-driving cars and computer-generated tweets and term papers. The 37-year-old AI guru thinks it's all good, transformative change. He is busy creating tools that empower humanity and cannot worry about all their applications and outcomes and whether there might be what he calls a "downside."

Just this March, in an interview for the podcast On with Kara Swisher, Altman seemed to channel his hero Oppenheimer, asserting that OpenAI had to move forward to exploit this revolutionary technology and that "it requires, in our belief, this continual deployment in the world." As with the discovery of nuclear fission, AI has too much momentum and cannot be stopped. The net gain outweighs the dangers. In other words, the market wants what the market wants. Microsoft is gung ho on the AI boom and has invested $13 billion in Altman's technology of the future, which means tools like robot soldiers and facial recognition-based surveillance systems might be rolled out at record speed.

We have seen such arrogance before, when Oppenheimer quoted from the Hindu scripture the Bhagavad Gita in the shadow of the monstrous mushroom cloud created by the Trinity test explosion in the Jornada del Muerto desert, in New Mexico, on July 16, 1945: "Now I have become Death, destroyer of worlds." No man in history had ever been charged with developing so powerful a scientific weapon, an apparent affront to morality and sanity that posed a grave threat to civilization, yet the project proceeded with all due speed on the basis that it was virtually unavoidable. The official line was that it was a military necessity: the United States could not allow the enemy to achieve such a decisive weapon first. The bottom line is that the weapon was devised to be used, it cost upwards of $2 billion, and President Harry Truman and his top advisers had an assortment of strategic reasons (hello, Soviet Union) for deploying it.

Back in the spring of 1945, a prominent group of scientists on the Manhattan Project had voiced their concerns about the postwar implications of atomic energy and the grave social and political problems that might result. Among the most outspoken were the Danish Nobel laureate Niels Bohr, the Hungarian émigré physicist Leo Szilard, and the German émigré chemist and Nobel winner James Franck. Their mounting fears culminated in the Franck Report, a petition by a group from the project's Chicago laboratory arguing that releasing this indiscriminate destruction upon mankind would be a mistake, sacrificing public support around the world and precipitating a catastrophic arms race.

The Manhattan Project scientists also urged policymakers to carefully consider the questions of what the United States should do if Germany was defeated before the bomb was ready, which seemed likely; whether it should be used against Japan; and, if so, under what circumstances. "The way in which nuclear weapons are first revealed to the world," they noted, "appears to be of great, perhaps fateful importance." They proposed performing a technical demonstration and then giving Japan an ultimatum. The writers of the Franck Report wanted to explore what kind of international control of atomic energy and weapons would be feasible and desirable and how a strict inspection policy could be implemented. The shock waves of the Trinity explosion would be felt all over the world, especially in the Soviet Union. The scientists foresaw that the nuclear bomb could not remain a secret weapon at the exclusive disposal of the United States and that it inexorably followed that rogue nations and dictators would use the bomb to achieve their own territorial ambitions, even at the risk of triggering Armageddon.

Fast-forward to the spring of 2023, when more than 1,000 tech experts and leaders, such as Tesla chief Elon Musk, Apple cofounder Steve Wozniak, and entrepreneur and 2020 presidential candidate Andrew Yang, sounded the alarm on the unbridled development of AI technology in a signed letter warning that AI systems present "profound risks to society and humanity." AI developers, they continued, are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

The open letter called for a temporary halt to all AI research at labs around the globe until the risks can be better assessed and policymakers can create the appropriate guardrails. There needs to be an immediate "pause for at least 6 months," it stated, on "the training of AI systems more powerful than GPT-4," which has led to the rapid development and release of imperfect tools that make mistakes, fabricate information unexpectedly (a phenomenon AI researchers have aptly dubbed "hallucination"), and can be used to spread disinformation and further the grotesque distortion of the internet. This pause, the signatories wrote, should be used to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," and they urged policymakers to roll out "robust AI governance systems." How the letter's authors hope to enforce compliance and prevent these tools from falling into the hands of authoritarian governments remains unclear.

Geoffrey Hinton, a pioneering computer scientist who has been called the "godfather of AI," did not sign the letter but in May announced that he was leaving Google in order to freely express his concerns about the global AI race. He is worried that the reckless pace of advances in machine superintelligence could pose a serious threat to humanity. Until recently, Hinton thought that it was going to be two to five decades before we had general-purpose AI (with its wide range of possible uses, both intended and unintended), but the trailblazing work of Google and OpenAI means the ability of AI systems to learn and solve any task with something approaching human cognition looms directly ahead, and in some ways they are already eclipsing the capabilities of the human brain. "Look at how it was five years ago and how it is now," Hinton said of AI technology. "Take the difference and propagate it forwards. That's scary."

Until this year, when people asked Hinton how he could work on technology that was potentially dangerous, he would always paraphrase Oppenheimer to the effect that when you see something that is "technically sweet," you go ahead and do it. He is not sanguine enough about the future iterations of AI to say that anymore.

Now, as during the Manhattan Project, there are those who argue against any moratorium on development for fear of the United States losing its competitive edge. Ex-Google CEO Eric Schmidt, who has expressed concerns about the possible misuse of AI, does not support a hiatus for the simple reason that it would benefit China. Schmidt is in favor of voluntary regulation, which he has described somewhat lackadaisically as letting the industry try to get its act together. Yet he concedes that the dangers inherent in AI itself may pose a larger threat than any global power struggle. "I think the concerns could be understated. Things could be worse than people are saying," he told the Australian Financial Review in April. "You have a scenario here where you have these large language models that, as they get bigger, have emergent behavior we don't understand."

If Nolan is true to form, audiences may find the personal dimension of Oppenheimer even more chilling than the IMAX-enhanced depiction of hair-raising explosions. The director has said that he is not interested in the mechanics of the bomb; rather, what fascinates him is the paradoxical and tragic nature of the man himself. Specifically, the movie will examine the toll inventing a weapon of mass destruction takes on an otherwise peaceable, dreamy, poetry-quoting blackboard theoretician, whose only previous brush with conflict was the occasional demonstration on UC Berkeley's leafy campus.

One of the things that would haunt Oppenheimer was his decision, as head of the scientific panel chosen to advise on the use of the bomb, to argue that there was no practical alternative to military use of the weapon. He wrote to Secretary of War Henry Stimson in June 1945 that he did not feel it was the panel's place to tell the government what to do with the invention: "It is clear that we, as scientific men, have no proprietary rights [and] no claim to special competence in solving the political, social, and military problems which are presented by the advent of atomic power."

Even at the time, Oppenheimer was already in the minority: most of the project scientists argued vehemently that they knew more about the bomb, and had given more thought to its potential dangers, than anyone else. But when Leo Szilard tried to circulate a petition rallying the scientists to present their views to the government, Oppenheimer forbade him to distribute it at Los Alamos.


After the two atomic attacks on Japan (first Hiroshima on August 6 and then, just three days later, Nagasaki on August 9), the horror of the mass killings, and of the unanticipated and deadly effects of radiation poisoning, forcefully hit Oppenheimer. In the days and weeks that followed, the brilliant scientific leader who had been drawn to the bomb project by ego and ambition, and who had skillfully helmed the secret military laboratory at Los Alamos in service of his country, was undone by the weight of responsibility for what he had wrought on the world. Within a month of the bombings, Oppenheimer regretted his stand on the role of scientists. He reversed his position and began frantically trying to use his influence and celebrity as the father of the A-bomb to convince the Truman administration of the urgent need for international control of nuclear power and weapons.

The film will almost certainly include the famous, or infamous, scene when Oppenheimer, by then a nervous wreck, burst into the Oval Office and dramatically announced, "Mr. President, I feel I have blood on my hands." Truman was furious. "I told him," the president said later, "the blood was on my hands, to let me worry about that." Afterward, Truman, who was struggling with his own misgivings about dropping the bombs and what it would mean for his legacy, would denounce Oppenheimer as that "cry-baby scientist."

In the grip of his postwar zealotry, Oppenheimer became an outspoken opponent of nuclear proliferation. He was convinced no good could come of the race for the hydrogen bomb. Just months after the Soviet Union's successful test of an atomic bomb in 1949, he joined other eminent scientists in lobbying against the development of the H-bomb. In an attempt to alert the world, he helped draft a report that went so far as to describe Edward Teller's Super bomb as a weapon of genocide (essentially, a threat to the future of the human race) and urged the nation not to proceed with a crash effort to develop bigger, ever more destructive thermonuclear warheads. In an effort to silence him, Teller and his faction of bigger-is-better physicists, together with officials in the U.S. Air Force who were eyeing huge defense contracts, cast aspersions on Oppenheimer's character and patriotism and dug up old allegations about his ties to communism. In 1954, the Atomic Energy Commission, after a kangaroo court hearing, found him to be a loyal citizen but stripped him of his security clearance.

Last December, almost 70 years later, the U.S. Department of Energy restored Oppenheimer's clearance, admitting that the trial had been flawed and that the verdict had less to do with genuine national security concerns than with his failure to support the country's hydrogen bomb program. The reprieve came too late for the physicist, whose reputation had been destroyed, his public life as a scientist-statesman over. He died in 1967, relatively young, aged 62, still an outcast.

Altman and today's other lofty tech leaders would do well to note the terrible swiftness of Oppenheimer's fall from grace: from hero to villain in less than a decade. And how quick the government was to dispense with Oppenheimer's advice once it had taken possession of his invention. The internet still remains unregulated in this country, but the European Union is considering labeling ChatGPT "high risk." Italy has already banned OpenAI's service. Perhaps revealing a bit of nervousness that he has gotten ahead of himself, Altman responded to the open letter about temporarily halting the development of AI by taking to Twitter to gush about the demand that his company release "a great alignment dataset," calling it "one thing coming up in the debate about the pause letter I really agree with."

Nolan's Oppenheimer epic will inevitably be a cautionary tale. The story of the nuclear weapons project illustrates, in the starkest terms, what happens when new science is developed too quickly, without any moral calculus, and how it can lead to devastating consequences that could not have been imagined at the outset.


Virgin Voyages and JLo Bust on A.I. To Sell Vacations – We Got This Covered

Photo via TikTok/JLo

Artificial intelligence is nothing to play with, even though apps are being handed out like toys for the world to enjoy. So it looks like Jennifer Lopez wanted to have some fun with the idea in her latest commercial for Virgin Voyages, and it's hilarious.

It's no secret that JLo can sell anything, from albums to movies and anything else she wants. What do AI and Virgin Voyages have to do with each other? The hope is that Virgin Voyages isn't out there with AI captains steering the ship. The world just had a tragedy with the Titan, and it was manned. We don't need another episode like that.

No, this is another thing entirely. This is a commercial, and it has all the humor that AI can provide. Putting on one of those headsets that cover a person's eyes sends them to an entirely different environment, and it's fun to be transported to the jungles of Africa or the beaches of Morocco. Just remember that another person can come along behind you and put that same headset on, and the virtual person takes on an entirely new personality.

"Birthday. Anniversary. Because you just want to live in the NOW… Let me personally invite your friends to celebrate at sea. Create a customized message using the @Virgin Voyages next Jen(eration) AI tool (link in bio)."

"It's not just a yacht! It's a super yacht!"

How many of you want that commercial to go on forever, with JLo doing all those personalities? I know I could watch it for days.

Everyone's a Virgin now. Just to make it clear, WGTC doesn't sell tickets to the show. We're not affiliated or anything. We just like the commercial.

Contributing Writer at WGTC, Michael Allen is the author of 'The Deeper Dark' and 'A River in the Ocean,' both available on Amazon. At this time, 'The Deeper Dark' is also available on Apple Books. Currently under contract to write a full-length feature spy drama for producer/director Anton Jokikunnas.
