Media Search:



What Is Image-to-Image Translation? | Definition from TechTarget – TechTarget

What is image-to-image translation?

Image-to-image translation is a generative artificial intelligence (AI) technique that translates a source image into a target image while preserving certain visual properties of the original image. This technology uses machine learning and deep learning techniques such as generative adversarial networks (GANs); conditional adversarial networks, or cGANs; and convolutional neural networks (CNNs) to learn complex mapping functions between input and output images.

Image-to-image translation allows images to be converted from one form to another while retaining essential features. The goal is to learn a mapping between the two domains and then generate realistic images in whatever style a designer chooses. This approach enables tasks such as style transfer, colorization and super-resolution, a technique that improves the resolution of an image.
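
To make the mapping idea concrete, here is a minimal sketch of such a translation network, assuming PyTorch; the tiny encoder-decoder architecture below is purely illustrative, not any production image-to-image model:

```python
# Minimal sketch of an image-to-image mapping network (assumes PyTorch).
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Encoder-decoder CNN that maps a 3-channel image to a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),  # downsample 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # downsample 32 -> 16
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # upsample 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # upsample 32 -> 64
            nn.Tanh(),  # outputs in [-1, 1], matching normalized image targets
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Smoke test on a random "image" batch: output shape matches input shape.
model = TinyTranslator()
out = model(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```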

Image-to-image technology encompasses a diverse set of applications in art, image enhancement, data augmentation and computer vision, also known as machine vision. For instance, image-to-image translation allows photographers to change a daytime photo to a nighttime one, convert a satellite image into a map and enhance medical images to enable more accurate diagnoses.

Image processing systems that use image-to-image translation follow the same basic steps: assemble and preprocess images from the source and target domains, train a model to learn the mapping between them, and then generate and evaluate the translated output.

A critical aspect of image-to-image translation is ensuring the model generalizes well to previously unseen inputs. Cycle consistency and unsupervised learning help to ensure that if an image is translated from one domain to another and then back, it returns to its original form. Deep learning architectures, such as U-Net and CNNs, are also commonly used because they can capture complex spatial relationships in images. During training, batch normalization and optimization algorithms are used to stabilize and expedite convergence.
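
As a rough illustration of why U-Net-style architectures suit this task, the hedged PyTorch sketch below shows a single encoder-decoder level whose skip connection carries fine spatial detail across the bottleneck, with the batch normalization mentioned above; a real U-Net stacks several such levels:

```python
# One U-Net-style level with a skip connection (assumes PyTorch; illustrative only).
import torch
import torch.nn as nn

class UNetLevel(nn.Module):
    """Single encoder/decoder level: features skip across the bottleneck."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU())
        self.bottleneck = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
        # The decoder sees the bottleneck output concatenated with the skip
        # features, so it takes 32 input channels.
        self.up = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)

    def forward(self, x):
        skip = self.down(x)                          # encoder features
        deep = self.bottleneck(skip)                 # deeper processing
        return self.up(torch.cat([deep, skip], 1))   # skip connection preserves detail

print(UNetLevel()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```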

The two main approaches to image-to-image translation are supervised and unsupervised learning.

Supervised methods rely on paired training data, where each input image has a corresponding target image. Using this approach, the system learns the direct mapping required between the two domains. However, obtaining paired data can be challenging and time-consuming, especially for complex image transformations.
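
Here is a hedged sketch of one supervised training step on paired data, assuming PyTorch; the one-layer "generator" is a stand-in, and real paired systems such as pix2pix add an adversarial loss on top of this pixel-wise term:

```python
# One supervised (paired) training step (assumes PyTorch; placeholder model).
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder mapping network
opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

# Paired data: every source image comes with its known target rendering.
source = torch.randn(4, 3, 64, 64)
target = torch.randn(4, 3, 64, 64)

pred = generator(source)
loss = F.l1_loss(pred, target)  # learn the direct source-to-target mapping
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```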

Unsupervised methods tackle the image-to-image translation problem without paired training examples. One prominent unsupervised approach is CycleGAN, which introduces the concept of cycle consistency. This involves two mappings: from the source domain to the target domain and vice versa. Cycle consistency ensures that an image translated into the target domain and back again remains similar to the original source image.
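
The cycle-consistency idea can be sketched in a few lines, again assuming PyTorch; the one-layer mappings are placeholders, and a full CycleGAN also trains adversarial discriminators in both domains:

```python
# Cycle-consistency loss sketch (assumes PyTorch; placeholder mappings).
import torch
import torch.nn as nn
import torch.nn.functional as F

G_xy = nn.Conv2d(3, 3, 3, padding=1)  # source -> target mapping
G_yx = nn.Conv2d(3, 3, 3, padding=1)  # target -> source mapping
opt = torch.optim.Adam(list(G_xy.parameters()) + list(G_yx.parameters()), lr=2e-4)

x = torch.randn(4, 3, 64, 64)  # unpaired source-domain batch

reconstructed = G_yx(G_xy(x))             # translate to the target domain and back
cycle_loss = F.l1_loss(reconstructed, x)  # penalize drift from the original image
opt.zero_grad()
cycle_loss.backward()
opt.step()
```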

Image-to-image translation and generative AI in general are touted as cost-effective, but they're also criticized for lacking creativity. It's essential to research the various AI models that have been developed to handle image-to-image translation tasks, as each comes with its own benefits and drawbacks. Research firms such as Gartner also urge users and generative AI developers to look for trust and transparency when choosing and designing models.

A number of popular models have been developed for these tasks, with CycleGAN, described above, among the best known.

Image-to-image translation is a popular generative AI technology.

Read more from the original source:

What Is Image-to-Image Translation? | Definition from TechTarget - TechTarget

There is probably an 80% consensus that free will is actually … – CTech

Dr. Tomas Chamorro-Premuzic and James Spiro

(Photo: Zoom/Sinay David)

"On a philosophical or testimonial level, if you look at most of the mainstream science, neuroscience, behavioral science, there is probably 80% consensus that free will is actually overrated or overstated," said Dr. Tomas Chamorro-Premuzic, author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. "We think we are in control of the decisions we make, but actually there are so many serendipitous and biologically driven causes of our decisions."

Dr. Tomas Chamorro-Premuzic is an organizational psychologist who works mostly in the areas of personality profiling, people analytics, talent identification, the interface between human and artificial intelligence, and leadership development. He is the Chief Innovation Officer at ManpowerGroup, a professor of business psychology at University College London and at Columbia University, co-founder of deepersignals.com, and an associate at Harvard's Entrepreneurial Finance Lab.

He is the writer behind books such as Why Do So Many Incompetent Men Become Leaders?, The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right, and this year's I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Joining CTech for its new BiblioTech video series, he discusses the integration of AI into our lives and how we can keep our unique creativity and value in an increasingly digital world.

"Leaving aside these philosophical discussions, what I highlight in the book is that if we get to a point where our decisions are so predictable that AI can make most of these decisions, even if we are not automated and replaced by AI, surely we need to question our sense of subjective free will?"

Many of the topics that Chamorro-Premuzic addresses in the book relate to the impact AI will have on our lives and how different generations might respond to the algorithms living beside us. For example, he cites tech leaders like Bill Gates and Elon Musk who present concerning views of AI, yet he also responds positively to how Gen Z might learn to adopt such technologies.

"One of the things that the digital age has introduced is ever more and more ADD-like behaviors," he continued. "We are pressed to do things quicker and quicker. And therefore there are few rewards for pausing and thinking."

Even though he believes humans are perfectly capable of stopping and taking time to consider their thoughts and actions, most of the decisions today in the AI age are so fast that they become very predictable and therefore easily outsourced to machines.

"Gen Z and the next generation will need to showcase their expertise in a different area or a different way," he told CTech. "Expertise is mutating from knowing a lot of answers to asking the right questions - from memorizing and retrieving facts to knowing how, why, and where the facts are wrong... Demonstrating and cultivating expertise is a big challenge for the young generations."

Tomas, in your book you tackle one of the biggest questions facing our species: "Will we use artificial intelligence to improve the way we work and live, or will we allow it to alienate us?" Why did you find that now was the moment that this question needed to be asked and why did your book come out when it did?

In my first book, Why Do So Many Incompetent Men Become Leaders? (And How to Fix It), I wrote 4-5 years ago that AI could be a really powerful tool to translate data and make leadership selection more data-driven. Then came The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right, which offered practical advice on how organizations can do that. Then, I was already contracted to do a new book during the pandemic, and on a personal level I found myself interacting with AI so much, and with other humans so little, that I thought this thing was really about to take off, especially if we were going to be in lockdown for a while.

I started to look at the wider impact of AI on human behavior. Coincidentally, the book was due to launch when OpenAI released ChatGPT, which I always say is good and bad. It's good because there is more interest now in a book that explores the implications for human intelligence and human creativity in an age where we can outsource much of our thinking to machines. And it's bad because I had to write it myself; I couldn't rely on ChatGPT to write it! I think the next one will probably be written by AI and I will edit it!

I'd like to highlight what some of the tech leaders of today have said about AI, which you address at the start of your book:

You comment that Bill Gates is concerned about superintelligence; Stephen Hawking noted that "Super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we are in trouble." Finally, you highlight how Elon Musk labeled AI "a fundamental risk to the existence of human civilization," although you point out that it hasn't stopped him from trying to implant it into our brains.

Tomas, why are we pursuing such a scary and unknown technology?

We're pursuing it mostly for two reasons. First, over the past 10 years we have amassed so much data that we don't have enough human resources or human intelligence available to analyze it. We have had to rely on large language models, or some version of AI, to help us make sense of the data and actually make decisions in a more efficient, quick, and effortless way, which is needed in a world that is so complex.

The second reason is that human beings are very lazy. We love to optimize everything for familiarity, for predictability. You could either sit down to watch any movie that Netflix recommends to you, and after five seconds you'll be watching a movie, or you could do what I do, which is dismiss the algorithm, dig deeper, and waste two hours of my life. By the time I actually find the movie I want to watch, it is time to go to sleep. We are trading deep, thoughtful, and expert-like decisions for efficiency, which means lazy, fast, and furious decision-making.

It is the same whether we are choosing a job, a romantic partner, a restaurant, a hotel, or what we consume in terms of news. This is why AI has been introduced as a potential tool that can enhance our productivity, even if we're not necessarily going to invest whatever savings we gain from that productivity into more thoughtful, creative, and intellectually fulfilling activities. Therein lies the problem.

I want to address some of the more nefarious things you mention and some of the ways that AI is affecting us in ways we don't understand. We speak about AI in the world, but how much choice do we have and how much is just an illusion of choice?

On a philosophical or testimonial level, if you look at most of the mainstream science, neuroscience, behavioral science, there is probably around 80% consensus that free will is actually overrated or overstated. It is mostly an illusion. We think we are in control of the decisions we make but actually, there are so many serendipitous and biologically driven causes of our decisions.

Leaving aside these philosophical discussions which are hard to verify and often don't mean much to the average consumer, it is clear to me: If we get to a point where our decisions are so predictable that AI can make most of these decisions, even if we are not automated and replaced by AI, surely we need to question our sense of subjective free will?

If, when I'm writing an email to you, Google's auto-complete is correct 95% of the time, then I have to wonder whether I really am an agentic, creative human who still has some choice, or whether it's more deterministic than we think. I think the way to think about these issues is that we are mostly free to choose, or at least we feel we are free to choose, but that doesn't necessarily mean we want to pause, think, and choose. One of the things that the digital age has introduced is ever more ADD-like behaviors. We are pressed to do things quicker and quicker, and therefore there are few rewards for pausing and thinking, which explains the rise of things like mindfulness movements, apps, and digital detoxes.

We are perfectly capable of pausing and thinking, but most of the decisions we are making in the AI age are so fast that they become very predictable and therefore they can be outsourced to machines.

I'd like to elaborate on what you mention in the book, which you call a "Crisis of Distractibility". I think it really sums up where so many of us are today online. What did you mean by that and how has it manifested in recent years?

Around 11 years ago I went to a digital marketing conference where all the big tech firms were present. For the first time, some people were introducing the notion of the second screen, which was very counterintuitive and bold at the time. People were watching TV while holding their iPads, or looking at their smartphones, and now there's a second-screen market.

Now, we all have 3-4 screens that we interact with all the time. Life itself has been downgraded to a distraction. You're almost distracted when you can't pay attention to your apps or your social media feeds. You get FOMO if you can't interact with people digitally and you have to pay attention to the analog world.

In terms of productivity, I think this is really important because even though we keep on arguing about whether technology and GenAI are going to lead to a productivity gain or the demise of human civilization, the tech firms keep telling us it will make us healthier, fitter, happier, and more productive.

Actually, the productivity data is very clear. Our productivity went up between 2000 and 2008, in the first wave of the digital revolution, only to stagnate or stall after that, following the advent of social media. Roughly 60-75% of smartphone use occurs during working hours, whether people are working from home or in an office, and 70% of workers report being distracted. Digital distractions cost the U.S. economy $650 billion in lost productivity per year, which is 15 times more than the cost of absenteeism, turnover, and sickness. Multitasking, which we all do, cuts our cognitive performance by around 10 IQ points. It's basically as debilitating as smoking weed, presumably minus the benefits.

We fool ourselves into thinking that we can multitask, but every time you switch from one task to another and go back, you've lost the equivalent of 26 minutes of concentration on that task. Technology might improve productivity, but sometimes you become more productive if you ignore technology or have the ability to resist it.

There is a whole new generation, Gen Z, growing up in the world you've been outlining, with AI and a search for uniqueness. What are some of the challenges they're going to face when trying to find their voice or establish their careers and relationships?

The main challenge will be to demonstrate social proof. If you are just entering or starting your career, no matter how smart you are, it is a very steep curve to demonstrate to others that you can provide more value than what they can get from AI. You're probably paying a lot of attention to ChatGPT and other forms of GenAI in terms of their ability to produce an article or an opinion piece. In your area of expertise, you're probably able to spot the errors, but the reason you add value there is your track record and experience: you actually know your stuff.

If you're just starting out, it's very difficult to persuade people that you have that expertise. Gen Z and the next generation will need to showcase their expertise in a different area or a different way. Expertise is mutating from knowing a lot of answers to asking the right questions - from memorizing and retrieving facts to knowing how, why, and where the facts are wrong, and, fundamentally, to making decisions on the basis of information that might be correct or incorrect. Demonstrating and cultivating expertise is a big challenge for the young generations.

I heard that the future artists or engineers won't be coders; they'll be prompt engineers. They're going to know how to get the best out of the AI, while at the moment folks like me are walking around blindfolded, not knowing what it's capable of.

There is an argument to be made that as soon as there are enough prompt engineers prompting AI, AI will learn to prompt itself, and then we will need to move to the next iteration. There is going to be a very intense cat-and-mouse game where, as soon as we develop something, it can be automated. Then we have to develop something else, and that can be automated too.

Creativity is really critical. Spotify probably has enough data to automate 80% of its artists, because it has an algorithm that understands what people like, and most music can be pre-processed and produced synthetically. Even if it automated 100% of its content, it probably wouldn't kill musicians. It would push artists to invent the next version of music. I think that's how we need to think about every form of performance that is intellectually, creatively, or artistically driven.

You touch on popular content in the book, such as Netflix's The Social Dilemma, the famous book The Age of Surveillance Capitalism, and of course Black Mirror, the modern-day Twilight Zone. What can readers learn from I, Human?

Hopefully they will learn a little bit about AI, especially if they don't have technical backgrounds. It's designed for people with no prior knowledge, to help them understand what AI is and what it isn't, and how the algorithms we interact with on a regular basis are reshaping our behavior.

Culture is always a big influence on how we behave. The average person today behaves differently from the average person in the Renaissance, medieval times, or ancient Greece or Rome, even though our hardware, our DNA, is the same. What I argue is that the current culture can be defined universally as the AI age, and with that come certain behavioral traits and markers that readers will discover in the book.

The final part is a call to action: how we need to change if we want to ensure that the AI age is also the human AI age, and that we use this technological invention to upgrade ourselves. The book finishes on a relatively optimistic note, urging readers to rediscover some of the qualities that make us who we are. AI will probably not harm things like deep curiosity, creativity, self-awareness, empathy, and EQ. The argument is that AI will probably win the IQ battle, but the EQ battle could be won by humans.

Read more here:

There is probably an 80% consensus that free will is actually ... - CTech

Meta is planning on introducing dozens of chatbot personas … – TechRadar

Meta is gearing up to announce a generative artificial intelligence chatbot (internally dubbed Gen AI Personas) aimed at enticing younger users into the world of AI chatbots. The new chatbot is expected to launch during Meta's Connect event on September 27 and will introduce some familiar but dated personas.

The Verge notes that the chatbots will come with different personas meant to promote more humanlike, engaging conversations that appeal to younger users. Among them are a sassy robot inspired by Bender from Futurama and a persona called Alvin the Alien.

Meta is planning to add dozens of familiar faces to its chatbot roster and even plans to create a tool that will enable celebrities to make their own chatbots for their fans. This is good news, as I could finally talk to Beyoncé.

Meta is clearly putting a lot of time and effort into perfecting its chatbot game in the budding world of AI. We all remember Snapchat's My AI, which rose to fame for about a week and then quickly fizzled out into obscurity.

Interestingly, the Wall Street Journal reached out to former Snap and Instagram executive Meghana Dhar, who noted that chatbots "don't scream Gen Z to me, but definitely, Gen Z is much more comfortable with new technology." She also added that Meta's goal with the chatbots is likely to keep users engaged for longer so it has increased opportunity to serve them ads.

That would explain the rather random selection of youth-oriented personas Meta is going for. While Bender from Futurama is pretty recognizable, he's not exactly a Gen Z icon. As someone from the demographic Meta seems to be targeting, I find him an extremely odd celebrity to slap onto your product, considering there's a plethora of other (more relevant) personalities to choose from.

The advantage Meta has in picking Gen Z as its target demographic is that Gen Z is very public about who they are super into right now. Meta could have picked literally anyone else, so hopefully the other personalities it has up its sleeve are a bit more contemporary.

Excerpt from:

Meta is planning on introducing dozens of chatbot personas ... - TechRadar

New AI algorithm can detect signs of life with 90% accuracy. Scientists want to send it to Mars – Space.com

Can machines sniff out the presence of life on other planets? Well, to some extent, they already are.

Sensors onboard spacecraft exploring other worlds have the capability to detect molecules indicative of alien life. Yet, organic molecules that hint at intriguing biological processes are known to degrade over time, making their presence difficult for current technology to spot.

But now, a newly developed method based on artificial intelligence (AI) is capable of detecting subtle differences in molecular patterns that indicate biological signals, even in samples hundreds of millions of years old. Better yet, the method delivers results with 90% accuracy, according to new research.

In the future, this AI system could be embedded in smarter sensors on robotic space explorers, including landers and rovers on the moon and Mars, as well as within spacecraft circling potentially habitable worlds like Enceladus and Europa.

"We began with the idea that the chemistry of life differs fundamentally from that of the inanimate world; that there are 'chemical rules of life' that influence the diversity and distribution of biomolecules," Robert Hazen, a scientist at the Carnegie Institution for Science in Washington D.C. and co-author of the new study, said in a statement. "If we could deduce those rules, we can use them to guide our efforts to model life's origins or to detect subtle signs of life on other worlds."

The new method relies on the premise that chemical processes that govern the formation and functioning of biomolecules differ fundamentally from those in abiotic molecules, in that biomolecules (like amino acids) hold on to information about the chemical processes that made them. This is likely to hold true for alien life, too, according to the new study.

On any world, life may produce and use higher quantities of a select few compounds to function on a daily basis. This would differentiate them from abiotic systems and it is these differences that can be spotted and quantified with AI, the researchers said in the statement.

The team worked with 134 samples, of which 59 were biotic and 75 were abiotic. To train and then validate the machine learning algorithm, the data was randomly split into a training set and a test set. The AI method successfully identified biotic samples from living things like shells, teeth, bones, rice and human hair, as well as from ancient life preserved in fossilized fragments of things like coal, oil and amber.

The tool also identified abiotic samples, including lab-synthesized chemicals such as amino acids and samples from carbon-rich meteorites, according to the new study.
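
For readers who want a feel for the workflow described, here is a hedged sketch of that split-and-validate setup, assuming scikit-learn; the feature values are random stand-ins, and the study's actual molecular measurements and model choice may differ:

```python
# Split-and-validate sketch for a biotic/abiotic classifier (assumes scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(134, 20))      # 134 samples, 20 placeholder features
y = np.array([1] * 59 + [0] * 75)   # 59 biotic, 75 abiotic, as in the study

# Randomly split into a training set and a test set, as the researchers describe.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```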

One immediate application: the new AI method can be used to study the 3.5 billion-year-old rocks of the Pilbara region in Western Australia, where the world's oldest fossils are thought to exist. First found in 1993, these rocks were thought to be fossilized remains of microbes akin to cyanobacteria, the first living organisms to produce oxygen on Earth.

If confirmed, the bacteria's presence so early in Earth's history would mean the planet was friendly to thriving life much earlier than previously thought. However, those findings have remained controversial, as research has repeatedly pointed out that the evidence could also be the result of purely geological processes having nothing to do with ancient life. Perhaps AI holds the answer.

This research is described in a paper published Monday (Sept. 25) in the journal Proceedings of the National Academy of Sciences.

Read the original:

New AI algorithm can detect signs of life with 90% accuracy. Scientists want to send it to Mars - Space.com

Johns Hopkins experts advise educators to embrace AI and ChatGPT – The Hub at Johns Hopkins

By Emily Gaines Buchler

Artificial intelligence (AI) chatbots like ChatGPT can solve math problems, draft computer code, write essays, and create digital art, all in mere seconds. But the knowledge and information spewed by these large language models are not always accurate, making fact-checking a necessity for anyone using them.

Since its launch in November 2022 by OpenAI, ChatGPT has kicked off a flurry of both excitement and concern over its potential to change how students work and learn. Will AI-powered chatbots open doors to new ways of knowledge-building and problem solving? What about plagiarism and cheating? Can schools, educators, and families do anything to prepare?

To answer these and other questions, three experts from Johns Hopkins University came together on Sept. 19 for "Could AI Upend Education?", a virtual event open to the public and part of the Johns Hopkins Briefing Series. The experts included James Diamond, an assistant professor in the School of Education and faculty lead of the Digital Age Learning and Educational Technology programs; Daniel Khashabi, an assistant professor of computer science in the Whiting School of Engineering; and Thomas Rid, a professor of strategic studies in the School of Advanced International Studies and director of the Alperovitch Institute for Cybersecurity Studies. Lainie Rutkow, vice provost for interdisciplinary initiatives and a professor of health policy and management in the Bloomberg School of Public Health, moderated the conversation.

Here are five takeaways from the discussion:

"The sudden introduction of any new technology into an educational setting, especially one as powerful as [a chatbot with AI], rightly raises concerns," Diamond says. " There are concerns about plagiarism and cheating, [and] a reduced effort among some learners to solve problems and build their own understandings. There are also real concerns about AI perpetuating existing biases and inaccuracies, as well as privacy concerns about the use of technology."

"ChatGPT is a superpower in the classroom, and like power in general, it can either be used for good or for bad," Rid said.

"If we look at human knowledge as an ocean, [then] artificial intelligence and large language models allow us to navigate the deep water more quickly, but as soon as we get close to the ground or shore, the training material in the model is shallow, [and the bot] will start to hallucinate, or make things up. So reliability is a huge problem, and we have to get across to students that they cannot trust the output and have to verify and fact-check."

"[With new and emerging generative AI,] there are some really powerful implications for personalized learning [and] easing work burdens," Diamond said. "There's the potential to foster deeper interest and topics among students. There's also the potential of using [these tools] to create new materials or generate draft materials that learners build off and [use to] explore new ways to be creative."

"You can [use various programs to] identify to what extent what portions of a particular generation [or, say, essay] have been provided by the [large language] model," Khashabi said. "But none of these are robots. None of them are 100% reliable. There are scenarios under which we can say that with some high degree of confidence something has been generated, but for the next few years, as a technologist, I would say, 'Don't count on those.'"

"Parents and caretakers can sit next to their kid and explore a technology like ChatGPT with curiosity, openness, and a sense of wonder, [so] their kids see these tools as something to explore and use [in an experimental way] to create," Diamond said.

"Educators can have discussions with students about what might compel a learner to cheat. [They] can start to develop their students' AI literacy to help them understand what the technology is, what it can and cannot do, and what they can do with it."

"It really is essential that all stakeholdersparents, students, classroom teachers, school administrators, policymakerscome together and have discussions about how this technology is going to get used," Diamond said. "If we don't do that, then we'll wind up in a situation where we have the technology dictating the terms."

Go here to read the rest:

Johns Hopkins experts advise educators to embrace AI and ChatGPT - The Hub at Johns Hopkins