Media Search:



Artificial intelligence's future value in environmental remediation – The Miami Hurricane

Artificial intelligence is enabling us to rethink how we integrate information, analyze data and use the resulting insights to improve decision-making. The power of AI is revolutionizing various industries, and environmental science is no exception.

With increasing threats of environmental stressors, AI is emerging as a powerful tool in detecting, mapping and mitigating these effects for the future.

As AI increasingly drives innovation and becomes a facet of everyday life, fears about its capabilities are growing.

It doesn't help that the media and pundits are stoking those fears, suggesting that AI could take over the world, lead to losses of control and privacy and devalue the importance of humans in the workforce.

According to Business News Daily, 69% of people worry that AI could take over their jobs entirely, while 74% predict that AI will eliminate all forms of human labor. However, its potential to remedy environmental problems shows the technology can also be put to beneficial use.

From monitoring air and water quality to predicting the spread of pollutants, AI is already playing a crucial role in safeguarding our environment and public health.

As 2030, the agreed deadline for hitting climate targets, quickly approaches, the world is on track to achieve only 12 percent of the Sustainable Development Goals (SDGs), with progress plateauing or regressing on over half of them.

"How can we use artificial intelligence, the technology that is revolutionizing the production of knowledge, to actually improve lives; to make the world a little bit safer, a little bit healthier, a little bit more prosperous; to help eliminate poverty and hunger; to promote health and access to quality education; to advance gender equity; to save our planet?" said U.S. Secretary of State Antony Blinken at the 78th Session of the United Nations General Assembly.

The most prominent applications of AI are currently in detecting, mapping and mitigating environmental toxins and pressures, which can help engineers and scientists gather more accurate data, but its uses are constantly growing and developing.

AI can help automate the process of taking and analyzing samples and recognizing the presence of specific toxins in water, soil or air, enabling real-time status reports. In delicate ecosystems, such as the coral reefs and wetlands around Florida, monitoring environmental parameters can alert researchers to harmful conditions and propel action.

AI models can also create analytical maps based on historical or statistical data to understand trends and trajectories regarding toxin levels, weather patterns, human activities and other relevant factors. Those models can also evaluate satellite imagery to identify areas where specific conditions may be present and be trained to recognize patterns or changes, which can be extremely important in forecasting future dangerous weather events, enhancing agricultural productivity to combat hunger, responding to disease outbreaks, and addressing other imminent climate change threats to Earth.
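As a toy illustration of the trend analysis described above, the sketch below fits a straight line to historical toxin readings with ordinary least squares and extrapolates it forward. The readings, years and variable names are invented for illustration; real systems use far richer models and data.

```python
# Minimal sketch: least-squares trend line over historical toxin readings,
# then a simple extrapolation. All numbers here are hypothetical.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

years = [2019, 2020, 2021, 2022, 2023]
toxin_ppb = [4.0, 4.5, 5.1, 5.4, 6.0]      # hypothetical annual readings

slope, intercept = fit_line(years, toxin_ppb)
forecast_2025 = slope * 2025 + intercept   # naive two-year extrapolation
```

A positive slope here would flag a worsening trajectory worth investigating before levels become dangerous.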

These technologies can also be used to identify the sources and pathways of toxins, optimize mitigation and intervention strategies, and monitor the success of mitigation efforts.

If these practices for AI are deployed effectively and responsibly, they can drive inclusive and sustainable growth for all, which can reduce poverty and inequality, advance environmental sustainability and improve lives around the world.

However, real concerns exist that the developing world is being left behind as AI advances rapidly. If not distributed equitably, the technology has the potential to exacerbate inequality.

Countries must work together to promote access to AI around the world, with a particular focus on developing countries. Industrialized nations should share knowledge that can advance progress toward achieving SDGs, as AI has the potential to advance progress on nearly 80 percent of them.

To succeed in directing AI toward achieving the SDGs, complete support and participation from the multistakeholder community of system developers, governments and organizations, and communities is required.

Meanwhile, the need for AI governance is imperative, and support from federal and state governments as well as corporations is crucial to this transition. As AI's footprint grows and nations work to manage risks, we must maximize its use for the greater good and deepen cooperation across governments to foster beneficial uses for AI.

The United States is committed to supporting and accelerating efforts on AI development, hoping to foster an environment where AI innovation can continue to flourish. Secretary Blinken mentioned at the UNGA the U.S.'s creation of a Blueprint for an AI Bill of Rights and a Risk Management Framework, which would guide the future use, design and safeguards of these systems.

The U.S. has announced a $15 million commitment designated to help more governments leverage the power of AI to drive global good, focused specifically on the SDGs. Commitments and contributions have also been made by other countries and large corporations, such as Google, IBM and Microsoft.

We are at an inflection point, and the decisions we make today will affect the world for decades to come, especially when it comes to AI and climate change. AI has the potential to accelerate progress, but with that comes an immense responsibility for governments, the private sector, civil society and individuals to consider the social, economic and environmental aspects of sustainability.

Lia Mussie is a senior majoring in ecosystem science and policy and political science with minors in sustainable business and public health.


Researchers develop a way to hear photos using artificial intelligence – KXLH News Helena

Researchers at Northeastern University have developed a way to extract audio from both still photos and muted videos using artificial intelligence.

The research project is called "Side Eye."

"Most of the cameras today have what's called image stabilization hardware," said Kevin Fu, a professor of electrical and computer engineering at Northeastern University. "It turns out that when you speak near a camera lens that has some of these functions, a camera lens will move ever so slightly, what's called modulating your voice, onto the image, and it changes the pixels."

Basically, these small movements can be interpreted into rudimentary audio that the Side Eye artificial intelligence can then translate into individual words with high accuracy, according to the research team.

"You're able to get thousands of samples per second. What does this mean? It means you basically get a very rudimentary microphone," Fu said.
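A rough sketch of the core idea, with all numbers invented: the tiny per-frame lens displacements form a low-quality audio waveform once the steady offset is removed and the signal is normalized. The real Side Eye pipeline is far more sophisticated; this only illustrates treating a displacement series as audio samples.

```python
# Hypothetical sketch: turn a series of sub-pixel lens displacements into
# a centered, normalized waveform in the range [-1, 1].

def displacements_to_waveform(displacements):
    """Remove the mean (DC offset) and scale the series to peak at 1.0."""
    mean = sum(displacements) / len(displacements)
    centered = [d - mean for d in displacements]
    peak = max(abs(c) for c in centered) or 1.0  # avoid division by zero
    return [c / peak for c in centered]

# Invented sub-pixel shifts measured across consecutive frames.
shifts = [0.010, 0.012, 0.009, 0.011, 0.014, 0.008]
waveform = displacements_to_waveform(shifts)
```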


Even though the recovered audio sounds muffled, some pieces of information can be extracted.

"Things like understanding what is the gender of the speaker, not on camera but in the room while the photograph or video is being taken, that's nearly 100% accurate," he said.

So what can technology like this be used for?

"For instance, in legal cases or in investigations of either proving or disproving somebody's presence, it gives you evidence that can be backed up by science of whether somebody was likely in the room speaking or not," Fu said.

"This is one more tool we can use to bring authenticity to evidence, potentially to investigations, but also trying to solve criminal applications," he said.



AI is already helping astronomers make incredible discoveries … – Space.com

World Space Week 2023 is here and Space.com is looking at the current state of artificial intelligence (AI) and its impact on astronomy and space exploration as the space age celebrates its 66th anniversary. Here, Paul Sutter discusses how AI is already helping astronomers make new, incredible discoveries.

Whether we like it or not, artificial intelligence will change the way we interact with the universe.

As a science, astronomy has a long tradition of looking for patterns by sifting through massive amounts of data, accidental discoveries, and a deep connection between theory and observation. These are all areas where artificial intelligence systems can make the field of astronomy faster and more powerful than ever before.

That said, it's important to note that "artificial intelligence" is a very broad term encompassing a wide variety of semi-related software tools and techniques. Astronomers most commonly turn to neural networks, where the software learns about all the connections in a training data set, then applies the knowledge of those connections in a real data set.

Related: How artificial intelligence is helping us explore the solar system

Take, for instance, data processing. The pretty pictures splashed online from the Hubble Space Telescope or James Webb Space Telescope are far from the first pass that those instruments took of that particular patch of sky.

Raw astronomical images are full of errors, messy foregrounds, contaminants, artifacts, and noise. Processing and cleaning these images to make something presentable, not to mention useful for scientific research, requires an enormous amount of input, usually done partially manually and partially by automated systems.

Increasingly, astronomers are turning to artificial intelligence to process the data, pruning out the useless bits of the images to produce a clean result. For example, an image of the supermassive black hole at the heart of the galaxy Messier 87 (M87), first released in 2019, was given a machine learning "makeover" in April 2023, resulting in a much clearer image of the black hole's structure.

In another example, some astronomers will feed images of galaxies into a neural network algorithm, instructing the algorithm with the classification scheme for the discovered galaxies. The existing classifications came from manual assignments, either by the researchers themselves or by volunteer citizen science efforts. Training set in hand, the neural network can then be applied to real data and automatically classify the galaxies, a process that is far faster and much less error-prone than manual classification.
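The train-then-classify workflow described above can be illustrated with a toy example. A simple nearest-centroid rule stands in for the neural network here, and the two "galaxy" features (concentration and ellipticity) and their labels are invented for illustration.

```python
# Toy stand-in for the classification workflow: learn class centroids from
# manually labeled examples, then assign labels to new, unlabeled data.

def train_centroids(samples, labels):
    """Average the feature vectors belonging to each class label."""
    sums, counts = {}, {}
    for feats, label in zip(samples, labels):
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, feats):
    """Assign the label of the nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((f - c) ** 2
                                   for f, c in zip(feats, centroids[lab])))

# Hypothetical (concentration, ellipticity) features with manual labels.
train_x = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.7], [0.3, 0.8]]
train_y = ["elliptical", "elliptical", "spiral", "spiral"]

centroids = train_centroids(train_x, train_y)
prediction = classify(centroids, [0.85, 0.15])  # new, unlabeled galaxy
```

A real pipeline swaps the centroid rule for a deep network trained on millions of labeled images, but the train-on-labels, apply-to-new-data loop is the same.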

Astronomers can also use AI to remove the optical interference created by Earth's atmosphere from images of space taken by ground-based telescopes.

AI has even been proposed to help us spot signatures of life on Mars, understand why the sun's corona is so hot, or reveal the ages of stars.

Astronomers are also using neural networks to dig deeper into the universe than ever before. Cosmologists are beginning to employ artificial intelligence to understand the fundamental nature of the cosmos. Two of the biggest cosmic mysteries are the identities of dark matter and dark energy, two substances beyond our current knowledge of physics that combined take up over 95% of all the energy contents throughout the universe.

To help identify those strange substances, cosmologists are currently trying to measure their properties: How much dark matter and dark energy there is, and how they've changed over the history of the universe. Tiny changes in the properties of dark matter and dark energy have profound effects on the resulting history of the cosmos, touching everything from the arrangement of galaxies to the star formation rates in galaxies like our Milky Way.

Neural networks are aiding cosmologists in disentangling all the myriad effects of dark matter and dark energy. In this case, the training data comes from sophisticated computer simulations. In those simulations cosmologists vary the properties of dark matter and dark energy and see what changes. They then feed those results into the neural network so it can discover all the interesting ways that the universe changes. While not quite yet ready for primetime, the hope is that cosmologists could then point the neural network at real observations and allow it to tell us what the universe is made of.
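The simulate-then-compare loop described above can be sketched in miniature. Here a grid search over one parameter stands in for the neural network, and the "simulator" is an invented toy function: simulate the observable for each candidate parameter value, then pick the value whose output best matches the observation.

```python
# Bare-bones sketch of simulation-based parameter inference. The simulator,
# parameter grid and "observation" are all invented for illustration.

def simulate(w):
    """Toy stand-in simulator: an observable as a simple function of w."""
    return [2.0 * w, w + 1.0, w ** 2]

def best_fit(observation, grid):
    """Return the grid value whose simulated output best matches the data."""
    def mismatch(w):
        return sum((s - o) ** 2 for s, o in zip(simulate(w), observation))
    return min(grid, key=mismatch)

grid = [round(-1.2 + 0.1 * i, 1) for i in range(9)]  # w from -1.2 to -0.4
observed = simulate(-1.0)                            # pretend measurement
estimate = best_fit(observed, grid)
```

In the real cosmological setting, the simulator is an expensive N-body code and a neural network learns the simulation-to-parameter mapping rather than brute-forcing a grid.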

Approaches like these are becoming increasingly critical as modern astronomical observatories churn out massive amounts of data. The Vera C. Rubin Observatory, a state-of-the-art facility under construction in Chile, will be tasked with providing over 60 petabytes (with one petabyte equaling one thousand terabytes) of raw data in the form of high-resolution images of the sky. Parsing that much data is beyond the capabilities of even the most determined of graduate students. Only computers, aided by artificial intelligence, will be up to the task.

Of particular interest to that upcoming observatory will be the search for the unexpected. For example, the astronomer William Herschel discovered the planet Uranus by accident during a regular survey of the night sky. Artificial intelligence can be used to flag and report potentially interesting objects by identifying anything that doesn't fit an established pattern. And in fact, astronomers have already used AI to spot a potentially dangerous asteroid using an algorithm written specifically for the Vera C. Rubin Observatory.
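Flagging objects that don't fit an established pattern can be illustrated with a minimal outlier check: anything more than two standard deviations from the population mean gets reported for follow-up. The brightness values below are invented, and real surveys use far more nuanced anomaly detectors.

```python
# Minimal sketch of flagging "unexpected" objects by simple z-score:
# measurements far from the population mean are reported for follow-up.

def flag_outliers(values, threshold=2.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

# Hypothetical brightness measurements; one object misbehaves.
brightness = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0]
unexpected = flag_outliers(brightness)
```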

Who knows what future discoveries we will ultimately have to credit to a machine?


Domino’s and Microsoft are working together on artificial intelligence – Restaurant Business Online

Domino's plans to start testing some AI strategies within the next six months. | Photo courtesy of Domino's

Domino's and Microsoft want to use AI to improve the pizza ordering process.

The Ann Arbor, Mich.-based pizza chain and the Redmond, Wash.-based tech giant on Tuesday announced a deal to work together on AI-based strategies to improve the ordering process. Domino's expects to test new generative AI-based technology in its stores within the next six months.

The companies said they would use Microsoft Cloud and the Azure OpenAI Service to improve the ordering process through personalization and simplification.

Domino's has already been experimenting with AI to modernize store operations. The company said that it is in the early stages of developing a generative AI assistant with Azure to help store managers with inventory management, ingredient ordering and scheduling.

The company also plans to streamline pizza preparation and quality control with more predictive tools. The idea is to free store managers' time so they can work more with employees and customers.

"Our collaboration over the next five years will help us serve millions of customers with consistent and engaging ordering experiences, while supporting our corporate stores, franchisees and their respective team members with tools to make store operations more efficient and reliable," Kelly Garcia, Domino's chief technology officer, said in a statement.

Domino's and Microsoft plan to establish an Innovation Lab pairing company leaders with world-class engineers to accelerate the time to market for store and ordering innovations. The companies also say they are committed to responsible AI practices that protect customer data and privacy.

"As consumer preferences rapidly evolve, generative AI has emerged as a game changer for meeting new demands and transforming the customer experience," said Shelley Bransten, VP of global retail, consumer goods and gaming at Microsoft.

Artificial intelligence has become increasingly common inside restaurants, with chains using the technology to take orders, do back-of-house tasks and make recommendations to customers. Large-scale chains in particular are in something of an arms race to find more uses for AI inside their restaurants to lower labor costs and improve customer service.


Restaurant Business Editor-in-Chief Jonathan Maze is a longtime industry journalist who writes about restaurant finance, mergers and acquisitions and the economy, with a particular focus on quick-service restaurants.


Speaker lectures on artificial intelligence – The Collegian – SDSU Collegian

Arijit (Ari) Sen encouraged the use of artificial intelligence to augment reporters and humans, not to replace them, during his Pulitzer Center Crisis Reporting Lecture held Sept. 28 in the Lewis and Clark room of South Dakota State University's Student Union.

Sen, an award-winning computational journalist at The Dallas Morning News and a former A.I. accountability fellow at the Pulitzer Center, discussed ways A.I. could be used and potential risks of A.I. in journalism.

"A.I. is often vaguely defined, and I think if you listen to some people, it's like the greatest thing since sliced bread," Sen said, before quoting Sundar Pichai, CEO of Google's parent company Alphabet, who has called A.I. "the most profound technology humanity is working on."

According to Sen, A.I. is basically machine learning that teaches computers to use "fancy math" to find patterns in data. Once a model is trained, it can be used to generate a number and make predictions by putting things into categories.

Sen feels that the more important questions to focus on are how A.I. is being used in the real world and what real harms the technology is causing people.

"There is a really interesting thing happening right now. Probably since about 2015, A.I. is starting to be used in investigative journalism specifically," Sen said, pointing to a story by the Atlanta Journal-Constitution (AJC) on doctors and sex abuse, in which around 100,000 disciplinary complaints had been filed against doctors. Given the sheer number of complaints, the AJC trained a machine learning model on data distinguishing complaints related to sexual assault from those that were not, making it easier to compile the story.

Although A.I. can prove useful for investigative journalism, Sen explained the risks of the technology and raised questions about the people behind the models: who labels the data, what the A.I. creator's intentions are, and whether humans given more time could do the same work.

"The other question we need to think about when working with an A.I. model is asking if a human could do the same thing if we gave them an unlimited amount of time on a task," Sen said. "And if the answer is no, then what makes us think that an A.I. model could do the same thing?"

Sen further elaborated on A.I. bias and fairness with another case study: Amazon scrapped its secret A.I. recruiting tool after it showed bias against women. Amazon had used its current engineers' resumes as training data; however, because most of its existing engineers were men, the A.I. developed a bias against women and ranked them worse than male candidates.

"One of the cool things about A.I. in accountability reporting is that we're often using A.I. to investigate A.I.," Sen said as he dove into his major case study, on Social Sentinel.

Sen described Social Sentinel, now known as Navigate360, as an A.I. social media monitoring tool used by schools and colleges to scan for threats of suicides and shootings.

"Well, I was a student, just like all of you, at the University of North Carolina at Chapel Hill (UNC), and there were these protests going on," Sen said. "You know, me being the curious journalist that I was, I wanted to know what the police were saying to each other behind the scenes."

Sen's curiosity led him to file a number of records requests, initially receiving around 1,000 pages. He ended up finding a contract between his college and Social Sentinel, which led him to wonder whether his college was using a sketchy A.I. tool. Sen landed an internship at NBC and wrote the story, which was published in December 2019.

"Around that time, I was applying to journalism grad school, and I mentioned this in my application at Berkeley," Sen said. "I was like, this is why I want to go to grad school; I want two years to report this out, because I knew that straight out of undergrad no one was going to hire me to do that story."

He recalls spending his first year doing a clip search, reading up on Social Sentinel, and finding that no one was looking at colleges, which he said was odd given the company had been started by two college campus police chiefs. He spent the remainder of his time calling colleges and writing story pitches.

Sen added details about his second year at Berkeley, where he was paired with his thesis advisor, David Barstow, and filed records requests across the country for at least 36 colleges and every four-year college in Texas.

"We ended up with more than 56,000 pages of documents by the end of the process," Sen said.

With all the documents in hand, Sen built databases in spreadsheets and analyzed Social Sentinel's alerts, which were sent as PDFs. He then analyzed tweets to check for threatening content, looking for the most frequent words after filtering out punctuation and common stop words.

"You can see the most common word used was 'shooting,' and you can see that would make sense," Sen said. "But a lot of times 'shooting' meant, like, shooting the basketball and things like that."
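The word-frequency step described above can be sketched in a few lines: strip punctuation, drop common stop words, and count what remains. The sample tweets and stop-word list below are invented for illustration.

```python
# Small sketch of the tweet analysis: punctuation removal, stop-word
# filtering and word counting. Text and stop list are hypothetical.
import string
from collections import Counter

STOP_WORDS = {"the", "a", "at", "was", "is", "to", "and", "of", "he"}

def word_counts(texts):
    """Count words across texts, ignoring punctuation, case and stop words."""
    counts = Counter()
    for text in texts:
        cleaned = text.lower().translate(
            str.maketrans("", "", string.punctuation))
        counts.update(w for w in cleaned.split() if w not in STOP_WORDS)
    return counts

tweets = [
    "Shooting hoops at the gym tonight!",
    "The shooting was downtown.",
    "Great game, he was shooting lights out.",
]
top_word, top_count = word_counts(tweets).most_common(1)[0]
```

As Sen's example shows, the raw count surfaces "shooting" as the top word even though most uses are about basketball, which is exactly why human review of the flagged content still matters.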

With this information acquired, Sen began speaking with experts, former Social Sentinel employees, colleges that used the service, and students and activists who were surveilled.

Through this reporting, Sen came up with three findings. First, and most significant, the tool was not really being used to prevent suicides and shootings but to monitor protests and activists. Second, Social Sentinel was trying to expand beyond social media to services such as Gmail and Outlook. Lastly, there was little evidence the tool had saved lives, although Social Sentinel claimed it was doing great.

Sen concluded that the story's impact reached other media outlets, which later published their own reporting on A.I. monitoring of student activity, and UNC eventually stopped using the service. Sen then took questions from the audience.

According to Joshua Westwick, director for the School of Communication and Journalism, the lecture was timely, especially considering the increased conversations about AI.

"Ari Sen's lecture was both engaging and informative. The examples that he shared illuminated the opportunities and challenges of AI," Westwick said. "I am so grateful we could host Ari through our partnership with the Pulitzer Center."

Westwick further explained that the lecture was exceptionally important for students and attendees as A.I. is present throughout many different aspects of our lives.

"As journalists and consumers, we need to understand the nuances of this technology," Westwick said. "Specifically, for our journalism students, understanding the technology and how to better report on it will be important in their future careers."

Greta Goede, editor-in-chief of the Collegian, described the lecture as one of the best she has attended. She said it was beneficial because Sen spoke about investigative journalism and how to look up key documents before writing a story.

"He (Sen) talked a lot about how to get data and how to organize it, which was really interesting to me since I will need to learn those skills as I go further into my career," Goede said. "I thought it was a great lecture and enjoyed attending."
