Archive for the ‘Artificial Intelligence’ Category

Director’s Blog: the latest from USPTO leadership – United States Patent and Trademark Office

With artificial intelligence speeding the innovation process, what does that mean for invention and a properly balanced patent system?

Blog by Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO

Artificial intelligence (AI) is one of the most powerful technologies of our generation, and it presents big opportunities and risks. At the USPTO, we are working on the responsible introduction of new AI into our workflows and organizational excellence, and we are working across government and closely with the Department of Commerce on AI.

Through our AI and Emerging Technology (ET) Partnership, we are also looking closely at the growing role of AI in innovation and its potential to dramatically affect our lives, improve our country's competitiveness, economic prosperity, and national security. Our AI/ET Partnership supports the Biden Administration's whole-of-government approach to AI, including the National AI Initiative to advance U.S. leadership in AI.

As AI assumes a larger and larger role in innovation, and given recent developments and the current trajectory in innovation in AI, we are presented with new questions. If an AI system can contribute to an invention at the same level as a human, is the invention patentable under current law? Does allowing AI systems to be listed as inventors promote and incentivize innovation? Should the USPTO require applicants to provide an explanation of contributions AI systems made to inventions claimed in patent applications? These are some of the questions we are asking in our recent request for comments on AI and inventorship.

For the last several years, the USPTO has been exploring these and other questions about the role of AI in innovation. And we know that role is increasing. We recently analyzed all of our patents to study the impact that artificial intelligence is having on technology development in the United States and the world. We found that 80,000 of our utility patent applications in 2020 involved artificial intelligence, 150% higher than in 2002. AI now appears in 18% of all utility patent applications we receive, and in more than 50% of all the technologies that we examine at the USPTO.

This data reinforces that AI is important in innovation in all industries, and from all regions of the country. And we know there are a lot of surrounding questions related to AI and inventorship.

AI has the potential to benefit our wellbeing in many ways, from revolutionizing the drug discovery and development process to helping address climate change. It also presents potential drawbacks, some of which we may not even yet recognize. It's important that we take a measured approach and hear your feedback on these important issues.

That's why we need your input on how the U.S. government should address AI-enabled innovations while ensuring that our laws and policies continue to encourage and incentivize innovation without unduly locking up advances that can be readily discovered with the use of AI. Our takeaways will shape our future work on AI and ET policy at the USPTO and will help inform the broader U.S. government's approach to these critical technologies.

We have listening sessions coming up to learn about the impact of AI on the invention process and intellectual property: on April 25 at the USPTO headquarters in Alexandria, VA, and on May 8 at Stanford University. You can attend both sessions either in person or virtually, and we'll make recordings available afterwards. Planning is already underway for our next AI and Emerging Technologies (ET) Partnership event later this summer. There, we'll focus on how we are responsibly using AI tools at the USPTO. You can find information about all these events on the AI and ET Partnership page of the USPTO website.

I hope you will join us at an upcoming listening session and encourage you to submit your feedback to the request for comments by May 15. We look forward to hearing from you!

Posted at 01:38 PM Apr 18, 2023 in USPTO

Original post:
Director's Blog: the latest from USPTO leadership - United States Patent and Trademark Office

Empowering Businesses to Harness the Power of Artificial Intelligence – Digital Journal

PRESS RELEASE

Published April 19, 2023

BridgeTheGap.ai, a leading provider of expert-driven services that help businesses unlock the power of artificial intelligence, announces its launch. The company is dedicated to closing the AI knowledge gap and enabling businesses to optimize efficiency, streamline operations, and boost revenue by providing practical resources and insights into real-life use cases and SOPs.

The transformative impact of AI on businesses cannot be overstated, but many companies struggle to leverage this cutting-edge technology due to the lack of AI expertise and knowledge. BridgeTheGap.ai aims to bridge this gap by providing businesses with the expertise and resources they need to succeed in the age of AI.

"At BridgeTheGap.ai, we believe that AI should be accessible to all businesses, regardless of size or industry. Our expert-driven services enable companies to harness the power of AI, optimize their operations, and unlock unprecedented growth potential," said Yury Byalik, the founder of BridgeTheGap.ai.

BridgeTheGap.ai offers a range of services, including AI training and education, AI strategy consulting, and AI implementation services. The company's team of AI experts has a wealth of experience in various industries, enabling them to provide practical insights and resources tailored to each client's unique needs.

With BridgeTheGap.ai, businesses can experience the transformative impact of AI on their operations and revenue. The company's services are designed to be accessible and affordable, making it easy for businesses of all sizes to take advantage of the benefits of AI.

About BridgeTheGap.ai:

BridgeTheGap.ai is dedicated to helping businesses unlock the power of artificial intelligence by closing the AI knowledge gap. BridgeTheGap.ai's mission is to help companies and their employees understand how AI can be used to make their business more efficient and effective.

BridgeTheGap.ai provides resources and training to help team members learn about AI and how it can be applied in various industries.

BridgeTheGap.ai provides a wide range of use cases and examples to help businesses see how they can best leverage AI. Their team of experts has experience working with businesses across a range of industries, and they use this experience to provide practical, real-world examples of how AI can be used to achieve better results.

Media Contact
Company Name: BridgeTheGap.ai
Contact Person: Yury Byalik
Country: United States
Website: http://bridgethegap.ai/

Here is the original post:
Empowering Businesses to Harness the Power of Artificial Intelligence - Digital Journal

Artificial intelligence, possible recession driving record fraud rates … – Fox Business


According to a new report, artificial intelligence (AI), a possible recession and a return to pre-pandemic activity are driving record fraud rates across the globe.

Pindrop, a global leader in voice technology, has released its annual Voice Intelligence & Security Report following an analysis of five billion calls and three billion fraud catches.

Fraud typically becomes a more significant problem during an economic downturn, and the report claims historical data suggests that insurance claims and fraud will skyrocket in 2023.


Photo illustration showing the ChatGPT and OpenAI research laboratory logo on a smartphone screen with a blurry background. ChatGPT is an app using artificial intelligence technology. (Nicolas Economou/NurPhoto via Getty Images)

With the pandemic winding down and economic conditions shifting, fraudsters have shifted focus away from government payouts and back to more traditional targets, such as contact centers.

But fraudsters are using new tactics to attack their old marks, including the use of personal user data acquired from the dark web, new AI models for synthetic audio generation and more. These factors have led to a 40% increase in fraud rates against contact centers in 2022 compared to the year prior.

The report found that fraudsters leveraging fast-learning AI models to create synthetic audio and content have already led to far-reaching consequences in the world of fraud. Although deepfakes and synthetic voices have existed for nearly 30 years, bad actors have made them more persuasive by pairing the tech with smart scripts and conversational speech.

Recently, Vice News used a synthetically generated voice with tools from ElevenLabs to utter a fixed passphrase "My Voice is My Password" and was able to bypass the voice authentication system at Lloyds Bank.


Scammers will often resort to "phishing," a nefarious information-gathering technique that uses fraud and trickery to fool people into handing over contact details, financial documents and payments. (iStock)

Arizona mother Jennifer DeStefano recounted a terrifying experience when phone scammers used AI technology to make her think her teenage daughter had been kidnapped.

The call came amid a rise in "spoofing" schemes in which fraudsters use voice cloning technology to claim they have kidnapped loved ones and demand ransom money.

But Pindrop says these technologies are not frequently used on the average citizen or consumer; rather, they are deployed in spear-phishing schemes to attack high-profile targets, like CEOs and other C-suite executives.

For example, a bad actor or team of fraudsters could use a CEO's voice to ask another executive to wire millions of dollars for a fake offer to buy a company.

"It's actually the voice of the CEO, even in the case of the CEO having an accent, or even in the case that the CEO doesn't have public facing audio," Pindrop Co-Founder and CEO Vijay Balasubramanian told Fox News Digital.

This voice audio is typically derived from acquired private recordings and internal all-hands messages.

Pindrop notes that such tech could become more pervasive and could amplify other established fraud techniques.



These include large-scale vishing and smishing (voice and SMS phishing) campaigns, social engineering of victims, and interactive voice response (IVR) reconnaissance. According to Pindrop, these tactics have caused lasting damage to brand reputations and driven consumers away, resulting in the loss of billions of dollars.

Since 2020, these data breaches have affected more than 300 million victims, and data compromises are at an all-time high, with more than 1,800 incidents reported in each of 2021 and 2022.

"It always starts with reconnaissance," Balasubramanian said.

IVR is the system companies use to guide callers through their call center: for example, "press one for billing information, or press two for your balance." These systems have become more conversational because of AI.
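As a rough, purely illustrative sketch of the kind of fixed menu logic a traditional touch-tone IVR encodes (the options and function names here are hypothetical, not any vendor's actual system):

```python
# Toy sketch of a touch-tone IVR menu: each keypress maps to a handler.
# Hypothetical and illustrative only; real IVR platforms are far more
# complex and, as the report notes, increasingly conversational.

def billing_info() -> str:
    return "You pressed 1: billing information."

def account_balance() -> str:
    return "You pressed 2: your account balance."

IVR_MENU = {
    "1": billing_info,
    "2": account_balance,
}

def handle_keypress(key: str) -> str:
    handler = IVR_MENU.get(key)
    if handler is None:
        return "Sorry, that is not a valid option."
    return handler()

if __name__ == "__main__":
    for key in ("1", "2", "9"):
        print(handle_keypress(key))
```

The point is only that a classic IVR is a fixed decision tree; the report's concern is that AI-driven, conversational versions return richer responses, which is what makes the reconnaissance described below possible.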


A person receives a potential spam phone call on their cell phone. (iStock)

"They're taking a social security number that they have and they will go to every single bank and punch in that social security number. And the response of that system is one of two things. I don't recognize what that is, or hey, welcome thank you for being a valued customer. Your account balance is x," Balasubramanian said.

After acquiring all this account information, fraudsters target the accounts with the highest balances.

They then send the account holder a convincing message claiming there is a fraudulent charge, including information mined with bots from the IVR systems. The message asks the account holder to divulge further information, such as a credit card number or CVV, which lets the fraudster finally access the account and remove funds.

To prevent scams using synthetic voices, pitch manipulation, and replay attacks, Pindrop says companies need to detect voice liveness in tandem with automatic speech recognition (ASR) and audio analytics that determine the speaker's environment and contextual audio.
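Pindrop does not disclose how its detection works, but as a minimal, hypothetical sketch of what low-level "audio analytics" on a caller recording can look like, the snippet below computes two simple spectral features from a WAV file. This is not a liveness or deepfake detector, only an illustration of the kind of signal features such systems build on; the file name is assumed.

```python
# Hypothetical sketch of basic audio analytics on a caller recording.
# NOT Pindrop's method and not a real liveness/deepfake detector; it
# only illustrates the kind of low-level spectral features involved.
import numpy as np
from scipy.io import wavfile

def spectral_features(path: str) -> dict:
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)

    spectrum = np.abs(np.fft.rfft(samples)) + 1e-12   # avoid log(0)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    centroid = float((freqs * spectrum).sum() / spectrum.sum())
    flatness = float(np.exp(np.mean(np.log(spectrum))) / spectrum.mean())
    return {
        "sample_rate_hz": int(rate),
        "spectral_centroid_hz": centroid,   # rough "brightness" of the audio
        "spectral_flatness": flatness,      # closer to 1.0 = noise-like
    }

if __name__ == "__main__":
    print(spectral_features("caller_audio.wav"))  # hypothetical file name
```

Production systems combine many such features with trained models, ASR output, and call metadata; a single feature like this proves nothing on its own.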


Unfortunately, research suggests that consumers in states that impose enhanced restrictions on the use of biometrics (such as California, Texas, Illinois, and Washington) are twice as likely to experience fraud. While these states enact such laws to protect consumer data, the legislation often makes no distinction for company cybersecurity measures, which need voice analytics to adequately protect company and consumer data.

"If I target a consumer from those states, they most likely don't have advanced analytics performed on the voice, they are not looking for deep fakes. They are not checking if the voice is distorted," Balasubramanian said. "They aren't looking for any of that, so it's easy to steal money for those consumers."

Continue reading here:
Artificial intelligence, possible recession driving record fraud rates ... - Fox Business

The Artificial Intelligence Takeover Has Begun – The Greyhound

The following represents the opinion of the student reporter and does not represent the views of Loyola University Maryland, The Greyhound, or Loyola University's Department of Communication.

Large language models (LLMs) are here to stay. These are artificial intelligence systems that can generate natural, fluent and coherent text on any topic, given some input. They can also converse with humans, answer questions, write code and perform other tasks that require natural language understanding and generation.

Some of the most popular LLMs today are ChatGPT, Bard, and Bing. ChatGPT is developed by OpenAI, a research organization backed by tech luminaries like Elon Musk and Sam Altman. Bard is created by Google, based on its Language Model for Dialogue Applications (LaMDA). Bing is powered by Microsoft, using its own proprietary technology.

These LLMs have attracted millions of users who use them for various purposes, such as entertainment, education, productivity, and creativity. Some examples of how people use LLMs are:

Chatting with ChatGPT for fun, learning or companionship. ChatGPT can engage in casual conversations, tell jokes, stories and trivia, and even flirt with users. It can also adapt to different personalities and tones, depending on the user's preferences.

Using Bard to generate ideas, summaries and content. Bard can help users with writing tasks, such as drafting emails, blog posts, presentations and essays. It can also provide suggestions, feedback and insights on various topics and domains.

Leveraging Bing to search for information, answers and solutions. Bing can not only provide relevant web results, but also generate natural language responses that explain the results or provide additional details. It can also solve problems, such as math equations, puzzles and quizzes.

The benefits of using LLMs are manifold. They can save time, enhance creativity, improve communication and expand knowledge. They can also provide entertainment, comfort and support. However, there are also some challenges and risks associated with LLMs, such as:

The quality and reliability of the generated text. LLMs are not always accurate or factual, as they rely on probabilistic methods and large amounts of data that may contain errors or biases. Users need to be aware of the limitations and uncertainties of LLMs, and verify the information they provide.

The ethical and social implications of the generated text. LLMs may produce text that is harmful, offensive or inappropriate, either intentionally or unintentionally. Users need to be responsible and respectful when using LLMs, and avoid generating or spreading text that may cause harm or offense to others.

The security and privacy of the user data. LLMs may collect and store user data, such as queries, responses and preferences, for improving their performance or providing personalized services. Users need to be aware of the data policies and practices of the LLM providers, and protect their personal information and identity.

LLMs are a powerful and promising technology that can transform the way we interact with information and each other. They offer many opportunities and benefits for users who want to explore new possibilities and enhance their capabilities. However, they also pose some challenges and risks that require caution and awareness from users who want to use them safely and ethically.

I would like to give Microsoft Bing a special thanks for writing all of that for me.

Here is the prompt I fed it to get that response: "Write a NYT article about how LLMs like ChatGPT, Bard, and Bing are here to stay, and how people use them as a benefit." It took ten seconds to generate all of that text.
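The author used Bing's web interface, but for readers curious how a comparable prompt can be sent to an LLM programmatically, here is a minimal sketch using the pre-1.0 openai Python package; the model choice and parameters are assumptions for illustration, and this is not how the article's text was actually generated.

```python
# Illustrative sketch only: sending a comparable prompt to an LLM via the
# OpenAI API (pre-1.0 "openai" package). The article's text was generated
# with Bing's web interface, not with this code.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = ("Write a NYT article about how LLMs like ChatGPT, Bard, and Bing "
          "are here to stay, and how people use them as a benefit.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # model choice is an assumption
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

Chat interfaces such as Bing, Bard, and ChatGPT wrap roughly this kind of call, layering on their own system prompts, safety filters and, in Bing's case, live web retrieval.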

My Experience Using LLMs

I've spent some time using all three platforms listed above. Both Microsoft Bing and ChatGPT are free for the public to use, but for Bard, you must join a waitlist. Of the three, ChatGPT-4 is the most advanced, followed by Bing and then Bard.

Bing is more informational and research-based, while ChatGPT and Bard are more conversational. You can feed them ludicrous tasks and they will fulfill them, as long as the request isn't deemed offensive. The most creative thing I got ChatGPT to do was pretend to be Barack Obama giving a speech about the pandemic, but structured as if it were written like the King James Version of the Bible. Needless to say, it's pretty remarkable. The current version of OpenAI's platform is called ChatGPT-4. It only has access to information leading up to September 2021.

Microsoft Bing is much different, and I have been using it for a couple of weeks now. You can access it through the Bing search engine; however, if you use Microsoft Edge as a browser, there is a dedicated button you can press. Bing is separated into three options: the Chat feature, the Compose feature, and Insights. I created the snippet above using the Compose feature, which allows you to write anything from paragraphs to emails, with different tones and lengths. However, the feature I find myself using the most is the Chat feature.

The Chat feature works differently from ChatGPT. First, you select a tone of response: creative, balanced, or precise. For the sake of my usage, I have been using precise. Afterward, you simply enter whatever you want to know, and it will scour the internet and generate a response based on the sources it pulls. The sources can be accessed either by clicking on the text or by clicking the links at the bottom.

Bard by Google is like ChatGPT, except that it has access to the internet. However, it is by far the weakest of the three. While it generates responses faster than its competitors, the information is more often than not incorrect, and I have noticed biases that could be considered problematic. Google says that Bard is experimental and a work in progress, and it is clear that the program needs more work.

GPT in the Classroom

Based on their popularity, it is very clear that LLMs are here to stay. However, the quick rise in use has left educators scrambling to adapt. Some applications, like Turnitin, are able to detect writing created by artificial intelligence. Yet that may not dissuade some students from using it, so the question now becomes: Should LLMs like ChatGPT be outright banned, or should educators learn to adapt their usage into the classroom?

I believe that the latter is the more practical answer for the classroom. ChatGPT and Bing should be treated as tools, just as search engines and databases are. As these programs are slowly being implemented into browsers and other services, banning them outright would do more harm than good. But what is stopping students from using them to blatantly cheat? Trevor Oberlander '24 has a pessimistic view on this.

"College degrees are worthless now because of ChatGPT," he said. "If everyone is cheating, learning in class becomes redundant."

Oberlander, an economics major, says that GPAs can no longer be considered a meaningful unit of measure for school, since it is impossible to tell whether legitimate work was done to earn the grade, and effort between students becomes incomparable if no one can tell who actually did the work.

Lily Tiger '24 is also skeptical about LLMs. As an English major, she is very worried about the future of education and about job security with her degree.

"I went to a career fair and I asked if anyone has any positions for experience with research and writing skills," she said. "If that is what ChatGPT can do, then it makes me feel threatened."

Tiger is also considering a career path in high school education, so the prospect of a readily available and free answer machine makes her nervous for schooling in the future. She also believes that the education system needs to adapt or somehow integrate LLMs into the classroom.

"We can't see it as a fear, because it's already here. If we refuse to talk about it, that isn't a good idea. Teachers need to learn how to use this with their students as a resource," Tiger said.

What's in store for the future?

While ChatGPT is currently a novel technology, in the roughly four months between the release of ChatGPT and GPT-4, the abilities of the artificial intelligence have grown exponentially. In a recent interview on the Lex Fridman Podcast, OpenAI's CEO, Sam Altman, stated that ChatGPT "will make a lot of jobs just go away."

I came across a TikTok recently that showed how GPT was integrated into a software program that creates over 100 professional-grade headshots based on a photo that the user submits. This is all done in mere hours. That right there is an industry killer, and the professional photography industry is not the only one affected so far. OpenAI recently announced that Shopify users will now have the ability to integrate a GPT-powered customer support representative on their online stores. Now, the customer support industry has been flattened as well.

ChatGPT is slowly creeping into every facet of our daily lives. Spotify now has a ChatGPT-powered DJ that mimics a human DJ and plays music tailored to your taste. Quizlet now has a GPT study partner, which asks you surprisingly in-depth questions about flashcards the user has provided. Even Snapchat has released a premium feature called My AI, a virtual friend that users can chat with as part of their Snapchat+ subscription. CNET, a company that writes news about technology and consumer electronics, revealed earlier this year that it has used artificial intelligence to write dozens of its articles.

Needless to say, this phenomenon is not going away anytime soon. In fact, I would say that it is here to stay. So what is the solution to the AI takeover? Unfortunately, I do not think there is an answer that would satisfy everyone. On one hand, LLMs are quickly proving to be an excellent resource for many. On the other, concerns about the ethics of using them are very real. And with Sam Altman outright saying that entire industries will be replaced by AI, the cause for concern is warranted.

A petition is currently circulating online that has been signed by the likes of Elon Musk and Apple co-founder Steve Wozniak. The petition calls for a halt to AI development beyond GPT-4 for at least six months. Its main tenet is that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The petition goes on to note that OpenAI recently stated that "at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." The signers believe that now is the time to do this. They are worried about the competitiveness of AI and the idea that we are quickly entering a stage in technological evolution where AI could become human-like.

I'm not one to toot the horn of someone like Elon Musk; however, we are clearly at a pivotal moment in the history of technology. It is incredible what ChatGPT and other LLMs can do, but I feel that we as a society should tread lightly down the artificial intelligence road. Science fiction is full of stories about artificial intelligence takeovers. While I do believe we are leagues away from that happening, we should still err on the side of caution.

Continued here:
The Artificial Intelligence Takeover Has Begun - The Greyhound

University World News: Artificial Intelligence Tools Offer … – Ole Miss News

ChatGPT in evaluation An opportunity for greater creativity?

By Natalie Simon

As debate rages over the possibilities and risks to higher education of artificial intelligence tools such as ChatGPT, evaluators are also asking what role AI and machine learning can play in their field.

Speaking at a virtual symposium hosted by the Centre for Research Evaluation at the University of Mississippi in the United States on March 24, independent evaluation consultant Silva Ferretti described ChatGPT as "the perfect bureaucrat: pedantic and by the book."

The symposium was titled "Are We at a Fork in the Road?" and explored implications and opportunities for AI in evaluation. It was hosted by Dr. Sarah Mason of the University of Mississippi and Dr. Bianca Montrosse-Moorhead of the University of Connecticut, co-editors of New Directions for Evaluation, a publication of the American Evaluation Association.

They said that disciplines around the world were grappling with the question of whether ChatGPT heralded a fork in the road with respect to powerful new generative AI. This potential fork emerges because generative AI is distinct from earlier AI models in that it can create entirely new content.

Read the complete report here.

Read more:
University World News: Artificial Intelligence Tools Offer ... - Ole Miss News