Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence, Real Consequences: The use of Artificial Intelligence platforms in higher-education – The Justice

Before I began writing this article, one of my professors suggested that I use ChatGPT to create a title for the piece. I did not do that, and will be very offended if you think I did. However, I did decide to give ChatGPT a chance and typed, "Can you please create a title for a school newspaper article which features three interviews with professors at Brandeis University discussing the potential benefits and drawbacks of ChatGPT in their respective fields of study and the classrooms in which they teach?" In response, I got:

"Exploring the Impact of ChatGPT: Perspectives from Brandeis University Professors"

Aside from title-making and the temptation of lightening one's never-ending workload, AI has been a rising concern in the education sector, where it can serve as a resource but also threaten the purpose of education in the first place. I was able to speak with three professors in the Brandeis community, all teaching different subjects and with different experiences regarding the use of ChatGPT and other forms of artificial intelligence in their classrooms.

On Feb. 13, I spoke with Prof. Elizabeth Bradfield (ENG). As a poet, Bradfield believes that AI should have no role in the creative process of writing poetry and other creative pieces. ChatGPT could be useful for tasks like compiling lists of poems or finding background information for a poem, but Bradfield said, "I still have to do the reading and the thinking." She said using artificial intelligence would be the opposite of creating art.

When talking about the joy and emotions that accompany writing and the writing process, Bradfield added, "Why would I give that away to AI?" As an educator, Bradfield would not encourage her students to use AI to create a poem. If a student handed her a poem created by AI, Bradfield said, it would be "a huge betrayal of trust. And why would I want to waste my time writing feedback for an AI poem?"

After speaking with Bradfield, I also had the opportunity to speak with Prof. Dylan Cashman (COSI) on Feb. 29. Cashman teaches two computer science elective courses as well as a few introductory courses. Discussing the rise of AI and its growing popularity, Cashman said it "has changed a lot of people's lives," pointing to the ethical and professional questions that have arisen out of its increased usage. Asked what measures he would take if a student handed in an assignment coded using AI, Cashman replied, "I think we are still learning what to do in that case."

On the use of artificial intelligence in elective computer science courses versus introductory ones, Cashman said his greatest concern with AI in computer science classrooms would be, "Do you care about the product that they are producing, or the process that they undergo while doing it? And I think it's a case-by-case basis by class."

Cashman also raised the fairness of grading an assignment completed with AI against one completed without it, noting that many AI-detection tools are not very accurate. An increasing concern for Cashman has been preserving the learning process itself: "In a formative assessment, I want them to hit a wall and I want them to get over that wall. That is truly the value of education. If someone uses AI, I worry about that a lot."

However, Cashman believes that in some cases, such as editing, writing and advanced electives more concerned with short-term research, using artificial intelligence can have a positive outcome. As a final remark, Cashman stated, "I think people are trying to decide what policies and cultural norms about AI should be based on how AI is being used right now. And people should be aware of how it will get better."

Finally, on March 1, I was able to speak briefly about AI in the field of legal studies with Prof. Douglas Smith (LGLS), who began working at Brandeis as a Guberman Teaching Fellow. Smith serves as director of Legal and Education Programs at The Right to Immigration Institute. Asked about the use of AI in his professional career, Smith replied, "I used it at a conference we just had, a law and society conference in Puerto Rico. I think it's great. I don't think I would rely on it, but it's great to talk to."

As an educator, Smith is not opposed to his students using ChatGPT, provided it is used properly. "I love ChatGPT. I encourage students to use it as a tool, as a research tool, and as a research tool they should cite it," said Smith.

From the insights of these three educators, the common consensus seems to be that we are still figuring it out. ChatGPT and other artificial intelligence platforms can be useful as a guide or an aid, but they also present serious problems, from corrupting academic integrity to raising broader implications for professional fields such as medicine and law.

Editor's Note: Justice Arts & Culture Editor Nemma Kalra '26 is associated with The Right to Immigration Institute and was not consulted on, did not contribute to, and did not edit any part of this article.

Read the original post:
Artificial Intelligence, Real Consequences: The use of Artificial Intelligence platforms in higher-education - The Justice

A.I. Is Learning What It Means to Be Alive – The New York Times

In 1889, a French doctor named François-Gilbert Viault climbed down from a mountain in the Andes, drew blood from his arm and inspected it under a microscope. Dr. Viault's red blood cells, which ferry oxygen, had surged 42 percent. He had discovered a mysterious power of the human body: When it needs more of these crucial cells, it can make them on demand.

In the early 1900s, scientists theorized that a hormone was the cause. They called the theoretical hormone erythropoietin, or "red maker" in Greek. Seven decades later, researchers found actual erythropoietin after filtering 670 gallons of urine.

And about 50 years after that, biologists in Israel announced they had found a rare kidney cell that makes the hormone when oxygen drops too low. It's called the Norn cell, named after the Norse deities who were believed to control human fate.

It took humans 134 years to discover Norn cells. Last summer, computers in California discovered them on their own in just six weeks.

The discovery came about when researchers at Stanford programmed the computers to teach themselves biology. The computers ran an artificial intelligence program similar to ChatGPT, the popular bot that became fluent with language after training on billions of pieces of text from the internet. But the Stanford researchers trained their computers on raw data about millions of real cells and their chemical and genetic makeup.

The researchers did not tell the computers what these measurements meant. They did not explain that different kinds of cells have different biochemical profiles. They did not define which cells catch light in our eyes, for example, or which ones make antibodies.
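To make the idea of label-free training concrete, here is a minimal sketch in the spirit of that description: a model learns to fill in masked values of a cell's profile using only the raw measurements. The sizes, architecture, and objective below are illustrative assumptions, not the Stanford team's actual setup, which the article does not specify.

```python
# Minimal sketch of label-free (self-supervised) training on cell profiles.
# Everything here is illustrative: the real model, data, and objective used
# by the Stanford team are not described in this article.
import torch
import torch.nn as nn

N_CELLS, N_GENES, MASK_FRAC = 2_000, 500, 0.15

# Stand-in for raw single-cell measurements: one row per cell. Real work
# would use millions of real cells and their chemical and genetic readouts.
profiles = torch.rand(N_CELLS, N_GENES)

model = nn.Sequential(                 # tiny encoder-decoder as a placeholder
    nn.Linear(N_GENES, 128), nn.ReLU(),
    nn.Linear(128, N_GENES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    batch = profiles[torch.randint(0, N_CELLS, (64,))]
    mask = torch.rand_like(batch) < MASK_FRAC    # hide ~15% of each profile
    pred = model(batch.masked_fill(mask, 0.0))
    # Loss is computed only on the hidden entries: the model must infer them
    # from the rest of the cell's profile, with no human-provided labels.
    loss = ((pred - batch)[mask] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

A model trained this way ends up grouping cells with similar profiles together, which is how an unusual cell type could surface without anyone pointing it out.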

See more here:
A.I. Is Learning What It Means to Be Alive - The New York Times

The Adams administration quietly hired its first AI czar. Who is he? – City & State New York

New York City has quietly filled the role of director of artificial intelligence and machine learning, City & State has learned. In mid-January, Jiahao Chen, a former director of AI research at JPMorgan Chase and the founder of the independent consulting company Responsible AI LLC, took on the role, which has been described by the city's Office of Technology and Innovation as "spearheading the city's comprehensive AI strategy."

Despite Mayor Eric Adams' administration publicizing the position last January, Chen's hiring nearly a year later came without any fanfare or even an announcement. The first mention of Chen as director of AI came in a press release sent out by the Office of Technology and Innovation on Thursday morning announcing next steps in the city's AI Action Plan. "OTI Director of AI and Machine Learning Jiahao Chen will manage implementation of the Action Plan," the press release noted.

New York City previously had an AI director under former Mayor Bill de Blasio's administration. Neal Parikh served as the city's director of AI under the office of former Chief Technology Officer John Paul Farmer, which released a citywide AI strategy in 2021. Under de Blasio, the city also had an algorithms management and policy officer to guide the city in the development, responsible use and assessment of algorithmic tools, which can include AI and machine learning. The old CTO's office and the work of the algorithms officer were consolidated, along with the city's other technology-related offices, into the new Office of Technology and Innovation at the outset of the Adams administration.

The Adams administration has referred to its own director of AI and machine learning as a new role, however, and has suggested that the position will be more empowered, in part because it sits within the larger, centralized Office of Technology and Innovation. According to the job posting last January, which listed a $75,000 to $140,000 pay range, the director will be responsible for helping agencies use AI and machine learning tools responsibly, consulting with agencies on questions about AI use and governance, and serving as a subject matter expert on citywide policy and planning, among other things. How the role will actually work in practice remains to be seen.

The Adams administration's AI action plan, published in October, is a 37-point road map aimed at helping the city responsibly harness the power of AI for good. On Thursday, the Office of Technology and Innovation announced the first update on the action plan, naming members of an advisory network that will consult on the city's work. That list includes former City Council Member Marjorie Velázquez, who is now vice president of policy at Tech:NYC. The office also released a set of AI principles and definitions, and guidance on generative AI.

OTI spokesperson Ray Legendre said that an offer for the position of director of AI was extended to Chen before the city's hiring freeze began last October. The office did not explicitly address why Chen's hiring wasn't announced when he started the role. "Over the past two months, Jiahao has been a key part of our ongoing efforts to implement the AI Action Plan," Legendre wrote in an email. "Our focus at OTI over the past few months has been on making progress on the Action Plan, which is what we announced today."

According to the website for Responsible AI LLC, Chen's independent consulting company, his résumé includes stints in academia as well as the private sector, including as a senior manager of data science at Capital One and as director of AI research at JPMorgan Chase.

After City & State inquired about Chen's role, Chen confirmed it on X, writing, "I can finally talk about my new job!"

Original post:
The Adams administration quietly hired its first AI czar. Who is he? - City & State New York

Report: Artificial Intelligence A Threat to Climate Change, Energy Usage and Disinformation – Friends of the Earth

March 7, 2024

WASHINGTON – Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

"AI companies spread hype that they might save the planet, but currently they are doing just the opposite," said Michael Khoo, Climate Disinformation Program Director at Friends of the Earth. "AI companies risk turbocharging climate disinformation, and their energy use is causing a dangerous increase in overall U.S. consumption, with a corresponding increase in carbon emissions."

"We are already seeing how generative AI is being weaponized to spin up climate disinformation or copy legitimate news sites to siphon off advertising revenue," said Sarah Kay Wiley, Director of Policy at Check My Ads. "Adtech companies are woefully unprepared to deal with generative AI, and the opaque nature of the digital advertising industry means advertisers are not in control of where their ad dollars are going. Regulation is needed to help build transparency and accountability to ensure advertisers are able to decide whether to support AI-generated content."

"The evidence is clear: the production of AI is having a negative impact on the climate. The responsibility to address those impacts lies with the companies producing and releasing AI at breakneck speed," said Nicole Sugerman, Campaign Manager at Kairos Fellowship. "We must not allow another 'move fast and break things' era in tech; we've already seen how the rapid, unregulated growth of social media platforms led to previously unimaginable levels of online and offline harm and violence. We can get it right this time, with regulation of AI companies that can protect our futures and the future of the planet."

"The climate emergency cannot be confronted while online public & political discourse is polluted by fear, hate, confusion and conspiracy," said Oliver Hayes, Head of Policy & Campaigns at Global Action Plan. "AI is supercharging these problems, making misinformation cheaper and easier to produce and share than ever before. In a year when 2 billion people are heading to the polls, this represents an existential threat to climate action. We should stop looking at AI through a benefit-only analysis and recognise that, in order to secure robust democracies and equitable climate policy, we must rein in big tech and regulate AI."

"The skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate threat-multipliers," said Charlie Cray, Senior Strategist at Greenpeace USA. "Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather disaster responses are enough to mitigate AI's contribution to the climate emergency."

Previously, the coalition submitted letters to President Biden and Senator Chuck Schumer calling on them to incorporate climate concerns into proposed AI legislation. The letters echo recommendations made in the report, including:

Communications contact: Erika Seiber, [emailprotected]

See the rest here:
Report: Artificial Intelligence A Threat to Climate Change, Energy Usage and Disinformation - Friends of the Earth

Artificial-intelligence tool shows high accuracy for diagnosing ear infections – University of Minnesota Twin Cities

Acute otitis media (ear infection) is one of the most common infections in children and a top indication for antibiotics, but diagnostic accuracy is relatively low, despite an ongoing search for ways to improve clinical skills: everything from training programs to identifying serum biomarkers.

But now, results from a large study of children suggest that an artificial intelligence (AI)-based tool to help interpret tympanic membrane (TM, or eardrum) videos during the clinical exam may boost accuracy and reduce unnecessary antibiotic prescribing. Researchers based at the University of Pittsburgh Medical Center published their findings this week in JAMA Pediatrics.

The research team developed a medical-grade smartphone application that uses the phone's camera to capture video of the tympanic membrane through an endoscope or otoscope. Using the app, the scientists collected a training library of otoscopic assessments of children younger than 36 months who were seen for sick or well visits at two pediatric clinics near Pittsburgh in 2018 and 2019.

Two validated otoscopists reviewed the videos and assigned a final diagnosis. They excluded samples when the TM was almost completely occluded by earwax or when the video was out of focus.

Using 1,151 videos from 365 children and the diagnosis information, the researchers developed an AI algorithm to evaluate TM features on the videos and make a diagnosis.
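For readers curious how such a model is commonly built, here is a minimal sketch of one typical recipe: fine-tuning a pretrained image network on labeled frames. The study's actual architecture and features are not described in this article, so everything below is an assumption for illustration, not the researchers' method.

```python
# Hypothetical sketch: fine-tune a pretrained image network on eardrum frames.
# The study's real architecture and preprocessing are not described here.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)    # acute otitis media: yes/no

frames = torch.rand(8, 3, 224, 224)   # stand-in for frames sampled per video
labels = torch.randint(0, 2, (8,))    # hypothetical expert diagnoses

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.CrossEntropyLoss()(model(frames), labels)
opt.zero_grad(); loss.backward(); opt.step()    # one illustrative training step
# In practice, frame-level predictions would be aggregated into one
# prediction per video before comparison with the expert diagnosis.
```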

They found that the AI tool has a sensitivity of 93.8% (95% confidence interval [CI], 92.6% to 95.0%) and a specificity of 93.3% (95% CI, 92.5% to 94.1%). The team also administered a questionnaire to parents, who were favorable about the AI tool: 80% wanted the doctor to use the AI tool during future visits. Comments from parent interviews were mostly positive.
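To show how point estimates like these pair with 95% confidence intervals, here is a short worked example using the standard Wilson score interval. The counts below are invented to roughly match the reported figures; the study's exact denominators and interval method may differ.

```python
# Worked example: sensitivity/specificity with 95% Wilson score intervals.
# The counts are hypothetical; only the point estimates mirror the article.
from math import sqrt

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

tp, fn = 938, 62   # hypothetical: infected ears correctly flagged / missed
tn, fp = 933, 67   # hypothetical: healthy ears correctly cleared / over-called

sensitivity = tp / (tp + fn)   # 0.938, matching the reported estimate
specificity = tn / (tn + fp)   # 0.933, matching the reported estimate
print(sensitivity, wilson_ci(tp, tp + fn))
print(specificity, wilson_ci(tn, tn + fp))
```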

AI accuracy was better than that of pediatricians, primary care physicians, and advanced-practice clinicians, and the authors wrote that the tool could reasonably be used in those settings to help with decisions about treatment.

They said other advantages include use by trained nonphysicians, documentation for the electronic health record, and discussion with parents.

"Improved diagnosis can help reduce inappropriate use of antimicrobials for this frequently diagnosed condition," the group added.

In a related commentary in the same issue, Hojjat Salmasian, MD, PhD, MPH, and Lisa Biggs, MD, both with the Children's Hospital of Philadelphia, noted that, of 692 AI-enabled medical devices approved by the Food and Drug Administration, only a few apply to pediatrics, and of those, only two are designed for ear, nose, and throat exams.

They wrote that the strengths of the study are the large dataset of video recordings and validation obtained with different instruments. They noted a few drawbacks, however, such as how training and testing data were selected and that expert otoscopists were the gold standard, rather than myringotomy and tympanocentesis. (The study's researchers opted not to use the procedures as the reference standard, because they are invasive and not practical for use in a large cohort of children.)

"Nevertheless, the high accuracy of the algorithm, at least in this retrospective analysis, as well as its implementation as a mobile application that could be used in real time, can lead to the hope that diagnosis of otitis media could be transformed using such technology," the two wrote.

Before the AI model for diagnosing otitis media reaches clinical practice, it needs to be studied prospectively and compared with clinician performance, Salmasian and Biggs wrote.

Fairness and bias need to be studied as well, they said. For example, an AI model trained mostly on lighter skin tones might be less accurate for patients who have darker skin tones.
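As a sketch of what such a fairness audit might look like, assuming a test set stratified by documented skin tone (the data below is synthetic, not from the study):

```python
# Synthetic illustration of a subgroup accuracy check; no real patient data.
# A real audit would stratify the study's test set by documented skin tone.
import random

random.seed(0)
groups = random.choices(["lighter", "darker"], k=1000)
# Simulate a model that is (hypothetically) less accurate for one group.
correct = [random.random() < (0.95 if g == "lighter" else 0.88) for g in groups]

for group in ("lighter", "darker"):
    hits = [c for c, g in zip(correct, groups) if g == group]
    print(f"{group}: accuracy {sum(hits) / len(hits):.3f} (n={len(hits)})")
```

A gap between the two printed accuracies would flag exactly the kind of bias the commentators warn about.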

Aside from improving efficiency in time-limited clinical settings, a more accurate AI-guided tool could also have other benefits.

"The model could promote antibiotic stewardship as well, since the ability to show a parent the visual findings may have significant impact on the parents acceptance of a treatment without antibiotics," Salmasian and Biggs wrote.

Go here to see the original:
Artificial-intelligence tool shows high accuracy for diagnosing ear infections - University of Minnesota Twin Cities