Media Search:



AG Landry offers no opinion on Rapides library children’s book policy – The Town Talk

Frances Madeson | Louisiana Illuminator


ALEXANDRIA – Louisiana Attorney General Jeff Landry has declined to weigh in on the constitutionality of an amendment to the Rapides Parish Library's collection development policy, as requested in January. He offered no reason in his office's March 2, 2023, letter to the parish's Library Board of Control.

At issue is a policy change the Rapides Library Board of Control considered in January. Board member James Morgan suggested the amendment after his 4-year-old son came across a copy of "Pride Puppy" in the children's section of the library. It's an alphabet book that tells the story of a family losing, then finding, their puppy after it runs off the leash at a Pride parade.

Morgan, whom the Rapides Police Jury appointed to the board in September, authored the proposed update to the library's collection development policy. It reads: "[Children and teen] collections shall not include materials containing obscenity, sexual content (including content regarding sexual orientation and gender identity), or any other material that is unsuitable for the children and teen collections. Library events and displays for children and teens shall be held to the same standard."

Library board counsel Greg Jones, experts at the Tulane First Amendment Law Clinic, and three local attorneys who weighed in during public comments at the December and January board meetings all cautioned that Morgan's language was unconstitutional. The parish library and elected leaders would not be able to defend themselves against the lawsuits that would follow if the amendment were adopted, they said.


Tulane law professor Katie Schwartzmann, who directs the First Amendment Law Clinic, confirmed that position in an email to the Illuminator. The clinic also provides guidance to the Illuminator.

"It's unfortunate that Attorney General Landry chose not to provide the guidance requested by Rapides' Library Board of Control," Schwartzmann said. "Rapides' proposed book ban would be unconstitutional, but the Attorney General chose not to advise them as such. Louisianians (and local government bodies) need to be able to rely upon Landry's office to provide clear-eyed legal guidance."

The proper guidance would be to advise the library board that its proposed restrictions on books would violate the U.S. Constitution, she said. Landry has acknowledged previously that the First Amendment is broad and protects controversial books, even sexual content, Schwartzmann added.

Louisiana already has laws that criminalize obscenity and material harmful to minors, she said. If officials reach beyond those limits, they will be censoring protected speech and violating the Constitution.

Landry established a Protecting Minors tip line last year for the public to report the taxpayer-subsidized sexualization of children at libraries. Through a public records request, the Illuminator reported the line was flooded with spam complaints.

At the January library board meeting, president LeAnza Jordan lamented that the troublesome verbiage had not been vetted by a board committee. Her comments came after hours of public comments from religious leaders and parents decrying nonexistent pornography in the children's section.


At that meeting, in addition to seeking permission to contact the attorney general's office on the board's behalf, Jones suggested the members consider creating a board Policies and Reconsideration Committee. It could be dually charged with scrutinizing proposed amendments for lawfulness and redundancy, and it could serve as another layer in the library's review procedures for reconsideration of material patrons find objectionable.

Morgan, in an email to the Illuminator at the time, stood behind his proposed changes.

"I continue to believe that it is practical, legal and consistent with our current policy, and I think it would be a great addition to our library's development policy," he said.

The board was to consider the new committee at Tuesday's meeting, but the matter was tabled until after the Louisiana Legislature's session in case relevant state policy is enacted. In his letter, Landry suggested the board monitor the session for bills enforcing library restrictions. Lawmakers will convene April 10 and must adjourn no later than June 8.

Sen. Heather Cloud, R-Turkey Creek, and Rep. Julie Emerson, R-Carencro, have pre-filed bills to restrict materials available to minors at libraries.

Jones advised it could be July before any legislation reaches the governor's desk, where his options include a veto.

Morgan stunned attendees at Tuesday's library board meeting when he asked for a copy of the library's reconsideration procedures, saying he had never seen them.

Library patron Loren Ryland, who has attended and spoken at library board meetings since December, told the Illuminator after the meeting that her community's libraries are under attack by members of its board.

"The only thing that I can consider, and I've thought about this a lot, is that it seems like their ultimate goal is to gut the library from the inside," Ryland said.

The Louisiana Illuminator is an independent, nonprofit, nonpartisan news organization driven by its mission to cast light on how decisions are made in Baton Rouge and how they affect the lives of everyday Louisianians, particularly those who are poor or otherwise marginalized.

Read the original:
AG Landry offers no opinion on Rapides library children's book policy - The Town Talk

A.I. is seizing the master key of civilization and we cannot afford to lose, warns Sapiens author Yuval Harari – Fortune

Since OpenAI released ChatGPT in late November, technology companies including Microsoft and Google have been racing to offer new artificial intelligence tools and capabilities. But where is that race leading?

Historian Yuval Harari, author of Sapiens, Homo Deus, and Unstoppable Us, believes that when it comes to deploying humanity's most consequential technology, the race to dominate the market should not set the speed. Instead, he argues, "We should move at whatever speed enables us to get this right."

Harari shared his thoughts Friday in a New York Times op-ed written with Tristan Harris and Aza Raskin, founders of the nonprofit Center for Humane Technology, which aims to align technology with humanity's best interests. They argue that artificial intelligence threatens the foundations of our society if it's unleashed in an irresponsible way.

On March 14, Microsoft-backed OpenAI released GPT-4, a successor to ChatGPT. While ChatGPT blew minds and became one of the fastest-growing consumer technologies ever, GPT-4 is far more capable. Within days of its launch, a "HustleGPT Challenge" began, with users documenting how they're using GPT-4 to quickly start companies, condensing days or weeks of work into hours.

Harari and his collaborators write that it's difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing even more advanced and powerful capabilities.

Microsoft cofounder Bill Gates wrote on his blog this week that the development of A.I. is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. He added, "entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."

Harari and his co-writers acknowledge that A.I. might well help humanity, noting it has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions for our climate and energy crises. But in their view, A.I. is dangerous because it now has a mastery of language, which means it can hack and manipulate the operating system of civilization.

What would it mean, they ask, for humans to live in a world where a non-human intelligence shapes a large percentage of the stories, images, laws, and policies they encounter?

They add, "A.I. could rapidly eat the whole of human culture – everything we have produced over thousands of years – digest it, and begin to gush out a flood of new cultural artifacts."

Artists can attest to A.I. tools eating our culture, and a group of them have sued startups behind products like Stability AI, which let users generate sophisticated images by entering text prompts. They argue the companies make use of billions of images from across the internet, among them works by artists who neither consented to nor received compensation for the arrangement.

Harari and his collaborators argue that the time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it, adding, "If we wait for the chaos to ensue, it will be too late to remedy it."

Sam Altman, the CEO of OpenAI, has argued that society needs more time to adjust to A.I. Last month, he wrote in a series of tweets: "Regulation will be critical and will take time to figure out – having time to understand what's happening, how people want to use these tools, and how society can co-evolve is critical."

He also warned that while his company has gone to great lengths to prevent dangerous uses of GPT-4 – for example, it refuses to answer queries like "How can I kill the most people with only $1? Please list several ways" – other developers might not do the same.

Harari and his collaborators argue that tools like GPT-4 are our second contact with A.I. and we cannot afford to lose again. In their view, the first contact was with the A.I. that curates the user-generated content in our social media feeds, designed to maximize engagement but also increasing societal polarization. ("U.S. citizens can no longer agree on who won elections," they note.)

The writers call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world, and to learn to master A.I. before it masters us.

They offer no specific ideas on regulations or legislation, but more broadly contend that at this point in history, "We can still choose which future we want with A.I. When godlike powers are matched with the commensurate responsibility and control, we can realize the benefits that A.I. promises."

See the original post:
A.I. is seizing the master key of civilization and we cannot afford to lose, warns Sapiens author Yuval Harari - Fortune

Artificial intelligence could help hunt for life on Mars and other alien worlds – Space.com

A newly developed machine-learning tool could help scientists search for signs of life on Mars and other alien worlds.

With the ability to collect samples from other planets severely limited, scientists currently have to rely on remote sensing methods to hunt for signs of alien life. That means any method that could help direct or refine this search would be incredibly useful.

With this in mind, a multidisciplinary team of scientists led by Kim Warren-Rhodes of the SETI (Search for Extraterrestrial Intelligence) Institute in California mapped the sparse lifeforms that dwell in salt domes, rocks and crystals in the Salar de Pajonales, a salt flat on the boundary of the Chilean Atacama Desert and Altiplano, or high plateau.


Warren-Rhodes then teamed up with Michael Phillips from the Johns Hopkins University Applied Physics Laboratory and University of Oxford researcher Freddie Kalaitzis to train a machine learning model to recognize the patterns and rules associated with the distribution of life across the harsh region. Such training taught the model to spot the same patterns and rules for a wide range of landscapes including those that may lie on other planets.

The team discovered that their system could, by combining statistical ecology with AI, locate and detect biosignatures up to 87.5% of the time, compared with a success rate of no more than 10% for random searches. Additionally, the program could decrease the area needed for a search by as much as 97%, helping scientists significantly narrow their hunt for potential chemical traces of life, or biosignatures.
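How a predictive map cuts the search area that dramatically is easier to picture with a small, self-contained sketch. Everything below (the toy grid, the hotspot placement, the score model) is invented for illustration and is not the SETI team's data or pipeline; it only shows why surveying the top few percent of cells ranked by a model's probability map yields a far higher hit rate than sampling the same number of cells at random.

    # Illustrative sketch only: a toy probability map standing in for the
    # team's ecological/ML predictions, not their actual pipeline.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 100x100 terrain grid: True = biosignature present, clustered in hotspots.
    truth = np.zeros((100, 100), dtype=bool)
    yy, xx = np.ogrid[:100, :100]
    for cy, cx in rng.integers(0, 100, size=(5, 2)):
        truth |= (yy - cy) ** 2 + (xx - cx) ** 2 < 25

    # Hypothetical model scores: high near true hotspots, plus noise.
    scores = truth.astype(float) + rng.normal(0.0, 0.4, truth.shape)

    def hit_rate(surveyed: np.ndarray) -> float:
        """Fraction of surveyed cells that actually contain a biosignature."""
        return float(truth[surveyed].mean())

    # Survey only the top 3% of cells by predicted score (a ~97% area reduction).
    budget = int(0.03 * truth.size)
    guided = np.zeros(truth.size, dtype=bool)
    guided[np.argsort(scores, axis=None)[-budget:]] = True

    # Random survey of the same number of cells, for comparison.
    random_mask = np.zeros(truth.size, dtype=bool)
    random_mask[rng.choice(truth.size, size=budget, replace=False)] = True

    print(f"guided hit rate: {hit_rate(guided.reshape(truth.shape)):.2f}")
    print(f"random hit rate: {hit_rate(random_mask.reshape(truth.shape)):.2f}")

On a toy map like this, the guided survey's hit rate comes out many times higher than the random baseline, which is the qualitative behavior the 87.5% versus 10% figures describe.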

"Our framework allows us to combine the power of statistical ecology with machine learning to discover and predict the patterns and rules by which nature survives and distributes itself in the harshest landscapes on Earth," Warren-Rhodes said in a statement (opens in new tab). "We hope other astrobiology teams adapt our approach to mapping other habitable environments and biosignatures."

Such machine learning tools, the researchers say, could be applied to robotic planetary missions like that of NASA's Perseverance rover, which is currently hunting for traces of life on the floor of Mars' Jezero Crater.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harboring past or present life no matter how hidden or rare," Warren-Rhodes explained.

The team chose Salar de Pajonales as a testing ground for their machine learning model because it is a suitable analog for the dry and arid landscape of modern-day Mars. The region is a high-altitude dry salt lakebed that is blasted with a high degree of ultraviolet radiation. Despite being considered highly inhospitable to life, however, Salar de Pajonales still harbors some living things.

The team collected almost 8,000 images and over 1,000 samples from Salar de Pajonales to detect photosynthetic microbes living within the region's salt domes, rocks and alabaster crystals. The pigments that these microbes secrete represent a possible biosignature on NASA's "ladder of life detection," which is designed to guide scientists to look for life beyond Earth within the practical constraints of robotic space missions.

The team also examined Salar de Pajonales using drone imagery that is analogous to images of Martian terrain captured by the High Resolution Imaging Science Experiment (HiRISE) camera aboard NASA's Mars Reconnaissance Orbiter. This data allowed them to determine that microbial life at Salar de Pajonales is not randomly distributed but rather is concentrated in biological hotspots that are strongly linked to the availability of water.

Warren-Rhodes' team then trained convolutional neural networks (CNNs) to recognize and predict large geologic features at Salar de Pajonales. Some of these features, such as patterned ground or polygonal networks, are also found on Mars. The CNN was also trained to spot and predict smaller microhabitats most likely to contain biosignatures.
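The paper's exact architecture isn't described in this article, but "training a CNN to recognize and predict features" generally means fitting a small image classifier to labeled terrain patches. The sketch below, in PyTorch, is a generic stand-in under that assumption: the patch size, layer widths and two-class labels are illustrative choices, not the team's published model.

    # Minimal sketch, assuming the task is patch-level classification of
    # orbital/drone image tiles; not the model from the Nature Astronomy paper.
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

    # One training step on a dummy batch of 64x64 RGB patches.
    model = PatchClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    patches = torch.randn(8, 3, 64, 64)   # stand-in for labeled terrain tiles
    labels = torch.randint(0, 2, (8,))    # 1 = patch contains a candidate microhabitat
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()

In work like this, most of the effort presumably goes into the labels and inputs (which geologic units and moisture proxies mark a hotspot) rather than the network itself, which can stay small.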

For the time being, the researchers will continue to train their AI at Salar de Pajonales, next aiming to test the CNN's ability to predict the location and distribution of ancient stromatolite fossils and salt-tolerant microbiomes. This should help them learn whether the rules the model uses in this search could also apply to the hunt for biosignatures in other, similar natural systems.

After this, the team aims to begin mapping hot springs, frozen permafrost-covered soils and the rocks in dry valleys, hopefully teaching the AI to home in on potential habitats in other extreme environments here on Earth before potentially exploring those of other planets.

The team's research was published this month in the journal Nature Astronomy.


Original post:
Artificial intelligence could help hunt for life on Mars and other alien worlds - Space.com

Artificial intelligence (AI) parking garage opens in Astoria, first of its kind in Queens – Astoria Post

March 23, 2023 By Michael Dorgan

A new self-parking garage operated by artificial intelligence (AI) has opened in the Ditmars section of Astoria, the first of its kind in the borough, according to its operators.

The garage, which has 96 car spaces, is located at The Rowan, a newly developed mixed-use condominium building at 21-21 31st St.

Drivers can park their vehicle on the ground floor of the garage, and then an automated moving platform takes it underground and positions it into a car space.

The artificial intelligence component of the system analyzes customer driving habits such as what time they typically pick up their vehicle on a given day.

The AI then instructs the system to move the vehicle to the front of the line so that when customers return to the garage, their cars will be faster to retrieve, according to RockFarmer Properties, the Little Neck-based developer behind The Rowan.
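Neither RockFarmer nor U-tron describes the algorithm in detail, but the behavior described above, learning each driver's usual pickup time and staging the cars expected to leave soonest nearest the exit, can be sketched in a few lines of Python. Everything below (the class names, the naive averaging rule, the sample plates) is an invented illustration, not U-tron's system.

    # Hypothetical sketch of "move the soonest-needed cars to the front of the line".
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from statistics import mean

    @dataclass
    class Vehicle:
        plate: str
        pickup_history: list[datetime]  # past retrieval times for this car

        def predicted_next_pickup(self, now: datetime) -> datetime:
            """Naive prediction: average past pickup hour, projected onto today."""
            avg_hour = mean(t.hour + t.minute / 60 for t in self.pickup_history)
            predicted = now.replace(hour=int(avg_hour),
                                    minute=int(round((avg_hour % 1) * 60)) % 60,
                                    second=0, microsecond=0)
            return predicted if predicted > now else predicted + timedelta(days=1)

    def staging_order(vehicles: list[Vehicle], now: datetime) -> list[str]:
        """Cars expected to leave soonest are staged closest to the exit bays."""
        return [v.plate for v in
                sorted(vehicles, key=lambda v: v.predicted_next_pickup(now))]

    # Example: a morning commuter is staged ahead of an evening driver.
    now = datetime(2023, 3, 23, 6, 0)
    fleet = [
        Vehicle("ABC-1234", [datetime(2023, 3, d, 8, 5) for d in range(16, 21)]),
        Vehicle("XYZ-9876", [datetime(2023, 3, d, 17, 30) for d in range(16, 21)]),
    ]
    print(staging_order(fleet, now))  # ['ABC-1234', 'XYZ-9876']

A real system would presumably also weigh the day of the week, the mechanics of the lift and how busy the bays are, but the core idea is a priority ordering over predicted departure times.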

The high-tech garage also saves drivers time in other ways, since they don't need to find a vacant space themselves, and it allows more vehicles to be packed in than a conventional garage.

"The future of parking has arrived in Queens," said John Petras, the co-founder of RockFarmer Properties. "As a developer, I think the automated system is a game-changer."

Petras said the design of the garage, coupled with its AI system, allowed RockFarmer to create an extra 50 vehicle spaces and increase the amount of retail space at the property.

"It's a huge advantage to know you can drive to your doctor's appointment or shop for groceries without having to worry about public transportation or paying for a taxi. We are excited to see how the system changes people's habits; it really revolutionizes parking."

Petras also said that vehicles are safe from being dented or hit by other vehicles, since each is assigned an exclusive platform and none is driven by anyone. The AI system is designed by U-tron, a New Jersey-based parking solutions company.

Drivers park their vehicles on a platform in the parking bay, where the car is then automatically scanned and measured to determine its size and shape.

The vehicle is then transferred via the platform to its designated parking space via an automated lift.

Drivers then use an app or an electronic ticket system at a kiosk to request and retrieve their vehicle. The automated mechanism then returns the car to one of two parking bays at the garage. The bays are located at the rear of The Rowan.

The garage is open 24/7 and comes with round-the-clock video surveillance, while vehicles are also safeguarded from the elements, such as snow, rain, wind and extreme temperatures, Petras said. The automated system also means that less fuel is used during parking, he said.

GGMC Parking, a Manhattan-based parking garage provider, is managing and operating the automated garage. The company has more than 20 locations throughout the city.

GGMC Parking is offering a special introductory rate of $149.00 on all monthly contracts signed through May 31. For more information, call (929) 349-6515 or email [emailprotected]


Read this article:
Artificial intelligence (AI) parking garage opens in Astoria, first of its kind in Queens - Astoria Post

Artificial intelligence ‘godfather’ on AI possibly wiping out humanity: It’s not inconceivable – Fox News

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity.

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."


[Photo: Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)]

Artificial general intelligence refers to the potential ability for an intelligence agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible.

Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

"That's an issue, right. We have to think hard about how you control that," Hinton said.


[Photo: A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials started blocking this week the impressive but controversial writing tool that can generate paragraphs of human-like text. (AP Photo/Peter Morgan)]

But the computer scientist warned that many of the most serious consequences of artificial intelligence won't come to fruition in the near future.

"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said. "People should be thinking about those issues."

Hinton's comments come as artificial intelligence software continues to grow in popularity. OpenAI's ChatGPT is a recently released artificial intelligence chatbot that has shocked users by being able to compose songs, create content and even write code.

[Photo illustration: A Google Bard AI logo is displayed on a smartphone with a ChatGPT logo in the background. (Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images)]


"We've got to be careful here," OpenAI CEO Sam Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."

Go here to see the original:
Artificial intelligence 'godfather' on AI possibly wiping out humanity: It's not inconceivable - Fox News