Archive for the ‘Artificial Intelligence’ Category

China's lawmakers walk fine line between AI development and tighter regulation – South China Morning Post

"We must establish a unified market for computing power services and the effective use of resources across the country," Yu, a CPPCC member, said.

Xi Jinping's hi-tech push steals the spotlight at China's two sessions

His appeal resonated with other delegates, including telecoms equipment maker ZTE's senior vice-president Miao Wei and Ma Kui, general manager at China Mobile's Sichuan branch, who both called for increased investment in, and more coordinated development of, computing infrastructure. Miao and Ma are NPC delegates.

"Computing power has become the focus of international competition," said Ma, who also highlighted the imbalance of the Chinese AI industry, with research teams located mostly in first-tier cities such as Beijing and Shanghai but computing resources clustered in other, smaller cities.

The calls for a state-orchestrated computing infrastructure come after five Chinese government bodies, including MIIT and the National Development and Reform Commission, the country's top economic planner, issued a policy titled "East-West Compute Transfer" to coordinate computing resources between China's eastern and coastal provinces and its western inland regions.

But Zhang Yunquan, a CPPCC member and a research fellow at the Chinese Academy of Sciences, said the project would not help efforts to train large language models (LLMs) for AI, as it mainly serves traditional data centre and cloud computing demands.

Instead, Zhang proposed state-led efforts to coordinate academic and industrial resources to build up a sovereign LLM.

Cao Peng, chair of the technology committee at Chinese e-commerce giant JD.com and head of its cloud unit, called for the development of home-made AI chips to circumvent Washington's export controls.

Two sessions 2024: China's construction of particle collider may start in 2027

Liu Qingfeng, chairman at iFlyTek, a Chinese AI specialist known for its voice recognition capability, called for a national-level approach to "systematically and rapidly propel our country's artificial general intelligence growth."

"We need to acknowledge the gap and consolidate resources from the state level to accelerate the catch-up [with US AI firms]," according to Liu.

Zeng Yi, a CPPCC member and head of China Electronics Corporation, warned that China was lagging in generative AI when it came to talent and basic scientific research. "We are all very anxious about being left behind," Zeng said.

Premier Li Qiang introduced an "AI+" initiative to integrate the power of AI across traditional sectors to drive economic growth, and to push for technology upgrades. Meanwhile, China's lawmakers and political advisers voiced concern about potential disruptions from AI, and called for effective regulation.

Lou Xiangping, head of China Mobile's branch in the central Henan province, proposed an accountability system to hold service providers, such as operators of local ChatGPT-like services, responsible for possible mishaps.

China has already implemented a registration system that requires local LLMs to apply for approval before providing public services. More than 40, or around one-fifth of the country's total number of LLMs, have been given the green light for public release.

Zhang Yi, a CPPCC member and senior partner at law firm King & Wood Mallesons, tabled a proposal on improving AI regulation, but also cautioned that too many laws might hinder the development of the local industry.

In explaining his proposal to local media, Zhang said China needs to balance regulation and development through an approach that "clearly defines what is illegal", while also allowing companies to innovate and explore new areas.

"As global AI competition intensifies, [we] need to be wary of how overbearing legal intervention could inhibit the healthy and orderly development of AI," he said.

Read the original here:
China's lawmakers walk fine line between AI development and tighter regulation - South China Morning Post

The benefits and risks of Artificial Intelligence – IT Brief Australia

In little more than 12 months, generative AI has evolved from a technical novelty into a powerful business tool. However, senior IT managers believe the technology brings with it risks as well as benefits.

According to the Immuta 2024 State of Data Security Report, 88% of senior managers say their staff are already using AI tools, regardless of whether their organisation has a firm policy on adoption.

Asked to nominate the key IT security benefits offered by AI, respondents to the Immuta survey pointed to improved phishing attack identification and threat simulation as two of the biggest. Others included anomaly detection and better audits and reporting.

When it came to identifying AI-related risks, respondents nominated the inadvertent exposure of sensitive information by employees and the unauthorised use of purpose-built models out of context. Additional named risks included the inadvertent exposure of sensitive data by large language models (LLMs) and the poisoning of training data.

Continuing growth

Despite these concerns, organisational uptake of AI appears likely to remain brisk. Analyst firm Gartner predicts that IT spending will increase more than 70% during the next year, and a significant portion will be invested in AI-related technologies and tools. Organisations will need to continue to embrace this new technology to remain competitive and relevant in today's economic landscape.

It's likely that 2024 will also become the year of the AI control system. Aside from the hype surrounding generative AI, there is a broader issue around developing a control system for the technology. This is because AI brings an entirely new paradigm in which there is little or no human control. AI initiatives, therefore, won't get into full-scale production without a new form of control system in place.

At the same time, organisations will come to realise that, as AI usage increases, they need to focus even more attention on data security. As we have seen with governments around the world, there has also been an urgent need to enact new laws and regulations to ensure that data privacy and data security concerns with generative AI are addressed.

As the technology evolves, it will become clear that the key to harnessing the power of large language model (LLM)-based AI lies in having a robust data governance framework. Such a framework is essential not only for guiding the ethical and secure use of LLMs but also for establishing standards for measuring their outputs and ensuring integrity.

The evolution of LLMs will open new avenues for applications in data analysis, customer service, and decision-making processes, further embedding LLMs into the fabric of data-driven industries.

The biggest winners when it comes to AI usage will be the organisations that create real value from better data engineering processes, leveraging models with their own data and business context. The key impact for these companies will be better knowledge management.

An ongoing reprioritisation and reassignment of resources

With the pace of change in technology and data usage likely to continue to increase, organisations will be forced to redirect resources into new data-related areas that will become priorities. Examples include data governance and compliance, data quality, and data integration.

Despite ongoing pressure to do more with less, organisations can't and won't halt investment in IT. These investments will be focussed on the critical building blocks that form the foundation of a modern data stack that is required to support AI initiatives.

Also, the traditional demarcation between data and application layers in an IT infrastructure will be replaced by a more integrated approach focused on data products. Rather than a few dozen apps, there will be hundreds of data products. Dubbed a "data-centric architecture", this approach will allow organisations to extract greater value from their data resources and better support their operations.

By working closer to the data, data teams can reduce latency and improve performance, opening up new possibilities for real-time reporting and analytics. This, in turn, supports better decision-making and more efficient business processes.

The coming year will see some fundamental changes in the way businesses manage and work with AI and data. Those that take time to experiment with the technology and determine its best use cases will be best placed to extract maximum value and achieve optimal results.

Go here to see the original:
The benefits and risks of Artificial Intelligence - IT Brief Australia

Learn the ways of machine learning with Python through one of these 5 courses and specializations – Fortune

The fastest-growing jobs in the world right now are ones dealing with AI and machine learning. That's according to the World Economic Forum.

This should come as no surprise, as new technology is being deployed practically daily that is revolutionizing the ways in which the globe works through automation and machine intelligence.


Beyond having foundational skills in mathematics and computer science, and soft skills like problem-solving and communication, core to the AI and machine learning space is programming, specifically Python. The programming language is one of the most in-demand for all tech experts.

Python plays an integral part in machine learning specialists' everyday tasks, says Ratinder Paul Singh Ahuja, CTO and VP at Pure Storage. He specifically points to its diverse set of libraries and the distinct roles each plays.
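As a rough sketch of those roles, the snippet below pulls together a handful of the libraries most commonly cited for machine learning work; the specific libraries, toy data, and model are illustrative assumptions, not Ahuja's actual list.

```python
# Illustrative sample of common Python ML libraries and their typical roles.
# The toy dataset and model choice are assumptions for demonstration only.
import numpy as np                                    # fast numerical arrays and math
import pandas as pd                                   # tabular data loading and wrangling
from sklearn.model_selection import train_test_split  # dataset-splitting utility
from sklearn.linear_model import LogisticRegression   # classical ML algorithms

# A toy end-to-end flow: frame the data, split it, fit a model, score it.
df = pd.DataFrame({
    "hours_studied": np.arange(1, 9),          # 1 through 8
    "passed":        [0, 0, 0, 1, 0, 1, 1, 1],
})
X_train, X_test, y_train, y_test = train_test_split(
    df[["hours_studied"]], df["passed"], test_size=0.25, random_state=42
)
model = LogisticRegression().fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```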

As you can imagine, best practices in the ever-changing AI field may differ depending on the day, task, and company. So, building foundational skills overall, and being able to differentiate yourself, is important in the space.

The good news for those looking to learn the ropes of machine learning and Python is that there are seemingly endless ways to gain knowledge online, even for free.

For those exploring the subject on their own, resources like W3Schools, Kaggle, and Google's crash course are good options. Even something as simple as watching YouTube videos and checking out GitHub can be useful.

"I think if you focus on core technical skills, and also the ability to differentiate, I think that there's still plenty of opportunity for AI enthusiasts to get into the market," says Rakesh Anigundi, Ryzen AI product lead at AMD.

Anigundi adds that because the field and job market are so complicated, even companies themselves are trying to figure out which skills are most useful for building products and solving problems. So, doing anything you can to stay ahead of the game can help propel your career.

For those looking for a little bit of a deeper dive into machine learning with Python, Fortune has listed some of the options on the market; they're largely self-paced but vary slightly in terms of price and length.

Participants can watch hours of free videos about machine learning, and each course ends with a multiple-choice question. Users are also provided five different challenges to take on. The interactive projects include the creation of a book recommendation engine, a neural network SMS text classifier, and a cat and dog image classifier.

Cost: Free

Length: Self-paced; 36 lessons + 5 projects

Course examples: TensorFlow; Deep Learning Demystified

Hosted with edX, this introductory course allows students to learn about machine learning and AI straight from two of Harvard's expert computer science professors. Participants are exposed to topics like algorithms, neural networks, and natural language processing. Video transcripts are also notably available in nearly a dozen other languages. For those wanting to learn more, the course is part of Harvard's computer science for artificial intelligence professional certificate program.

Cost: Free (certificate available for $299)

Length: 6 weeks (4-5 hours/week)

Course learning goals: Explore advanced data science; train models; examine results; recognize data bias

Data scientists from IBM guide students through machine learning algorithms, Python classification techniques, and data regression. Participants are recommended to have a working knowledge of Python, data analysis, and data visualization, as well as high school-level mathematics.

Cost: $49/month

Length: 12 hours (approximately)

Module examples: Regression; Classification; Clustering

With nearly 100 hours of content, instructors from Stanford University and DeepLearning.AI, including renowned AI and edtech leader Andrew Ng, walk students through the foundations of machine learning. The specialization also focuses on the application of AI in the real world, especially in Silicon Valley. Participants are recommended to have some basic coding experience and knowledge of high school-level mathematics.

Cost: $49/month

Length: 2 months (10 hours/week)

Course examples: Supervised Machine Learning: Regression and Classification; Advanced Learning Algorithms; Unsupervised Learning, Recommenders, Reinforcement Learning

A professor from the University of Michigan's School of Information and College of Engineering teaches students the ins and outs of machine learning, with discussion of regression, classification, neural networks, and more. The course is for individuals with some existing knowledge of the data and AI world. It is part of a larger specialization focused on data science methods and techniques.

Cost: $49/month

Length: 31 hours (approximately)

Course examples: Fundamentals of Machine Learning; Supervised Machine Learning; Evaluation

Check out all of Fortune's rankings of degree programs, and learn more about specific career paths.

Link:
Learn the ways of machine learning with Python through one of these 5 courses and specializations - Fortune

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time: IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy! quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they helped turn some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, which it calls Khanmigo and which is currently in a pilot phase in a series of schools.

Khan's system does come with an off-putting warning, though, noting that it "makes mistakes sometimes." The warning is necessary because all of the latest AI chatbots suffer from what are known as "hallucinations", the word used to describe situations where a chatbot simply fabricates details when it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear whether that approach can reach a level of accuracy that educators will accept.
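As a sketch of that second-chatbot-as-checker idea, the snippet below has one model draft an answer and a second pass flag likely fabrications. It uses the OpenAI Python client as one concrete interface; the model name, prompt wording, and the "OK" convention are illustrative assumptions, not the Georgia Tech researchers' actual system.

```python
# Minimal sketch of the "second chatbot as fact-checker" pattern.
# Prompts, model choice, and the 'OK' convention are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def answer_with_verification(question: str) -> str:
    draft = ask(question)
    # A separate pass plays the skeptic: flag claims the draft cannot support.
    verdict = ask(
        "You are a fact-checker. List any statements in the answer below that "
        "are likely fabricated or unsupported. Reply 'OK' if none.\n\n"
        f"Question: {question}\n\nAnswer: {draft}"
    )
    if verdict.strip() == "OK":
        return draft
    # Otherwise, ask for a rewrite with the flagged claims removed.
    return ask(
        f"Rewrite this answer, removing these dubious claims:\n{verdict}\n\n"
        f"Answer: {draft}"
    )
```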

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education: a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile." That led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil, far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so Feldstein has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As Feldstein gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper that you can ask questions to, even if their results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool, rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is to automate things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own large language model from scratch, designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains, by being trained with vetted datasets rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Go here to see the original:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no eye of the storm equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical, not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces, and giving them their ability to generate responses, is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate "reasonable continuations" of text by drawing from a ranked list of words, each given a different weighted probability based on the data set the model was trained on.
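A minimal sketch of that mechanism: given a ranked list of candidate next words with weighted probabilities, pick one at random in proportion to its weight. The four-word vocabulary and the probabilities below are invented for illustration; a real LLM computes this distribution over tens of thousands of tokens with a neural network.

```python
# Toy illustration of sampling the "next reasonable word" from a ranked,
# probability-weighted list. Vocabulary and weights are made up.
import random

def next_token(distribution: dict[str, float], temperature: float = 1.0) -> str:
    """Sample the next token from a probability-weighted ranking."""
    # Temperature reshapes the ranking: low values favor the top choice,
    # high values flatten the distribution toward a random pick.
    weights = {tok: p ** (1.0 / temperature) for tok, p in distribution.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return token
    return max(weights, key=weights.get)  # fallback for float rounding

# Hypothetical model output for the prompt "The cat sat on the":
ranked = {"mat": 0.55, "floor": 0.20, "sofa": 0.15, "moon": 0.10}
print(next_token(ranked, temperature=0.7))
```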

Read more: There Are a Lot of Generative AI Acronyms: Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the technology's hype, the reality is far more driven by math than by myth.

"It is important for everyone to understand that AI learns from data. At the end of the day, [AI] is merely probabilistics and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output originate from?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals that each next reasonable word is identified, weighted, and then generated on a token-by-token basis, as AI models break words apart into more manageable sub-word units called tokens.
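Those tokens can be inspected directly with the open-source tiktoken library, which implements the tokenizer OpenAI publishes for its models; the encoding name below is the one used by its recent models, and other vendors' models tokenize differently.

```python
# Inspect the sub-word tokens a model actually sees, using tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")    # encoding used by recent OpenAI models
ids = enc.encode("Demystifying artificial intelligence")
print(ids)                                    # a short list of integer token ids
print([enc.decode([i]) for i in ids])         # the text piece each id maps to
```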

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, as well as to be further fine-tuned to business-specific goals with real-time data.
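As a minimal sketch of how grounding in internal data can work, the snippet below scores vetted internal documents against a query and hands only the best match to the model as context. TF-IDF stands in for the embedding search a production system would use, and the documents and query are invented for illustration.

```python
# Minimal sketch: ground answers in vetted internal documents by retrieving
# the most relevant one before asking the model. Documents are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_docs = [
    "Chargeback disputes must be filed within 60 days of the statement date.",
    "AML screening runs on every cross-border payment above $1,000.",
    "Delinquency risk scores are refreshed nightly from the ledger.",
]
query = "How quickly must a chargeback dispute be filed?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(internal_docs)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]

best = internal_docs[scores.argmax()]
# The retrieved passage, not the open internet, becomes the model's context:
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```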

As Akli Adjaoute told PYMNTS back in November, if you go into a field where the data is real, particularly in the payments industry, "whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments AI can bring a lot of benefit."


Read the rest here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com