Archive for April, 2021

Here's why UF is going to use artificial intelligence across its entire curriculum | Column – Tampa Bay Times

Henry Ford did not invent the automobile. That was Karl Benz.

But Ford did perfect the assembly line for auto production. That innovation directly led to cars becoming markedly cheaper, putting them within reach of millions of Americans.

In effect, Ford democratized the automobile, and I see a direct analogy to what the University of Florida is doing for artificial intelligence (AI, for short).

In July, the University of Florida announced a $100 million public-private partnership with NVIDIA, the maker of graphics processing units used in computers, that will catapult UF's research strength to address some of the world's most formidable challenges, create unprecedented access to AI training and tools for under-represented communities, and build momentum for transforming the future of the workforce.

At the heart of this effort is HiPerGator AI, the most powerful AI supercomputer in higher education. The supercomputer, as well as related tools, training and other resources, is made possible by a donation from UF alumnus Chris Malachowsky as well as from NVIDIA, the Silicon Valley-based technology company he co-founded and a world leader in AI and accelerated computing. State support also plays a critical role, particularly as UF looks to add 100 AI-focused faculty members to the 500 new faculty recently added across the university, many of whom will weave AI into their teaching and research.

UF will likely be the nation's first comprehensive research institution to integrate AI across the curriculum and make it a ubiquitous part of its academic enterprise. It will offer certificates and degree programs in AI and data science, with curriculum modules for specific technical and industry-focused domains. The result? Thousands of students per year will graduate with AI skills, growing the AI-trained workforce in Florida and serving as a national model for institutions across the country. Ultimately, UF's effort will help to address the important national problem of how to train the nation's 21st-century workforce at scale.

Further, due to the unparalleled capabilities of our new machine, researchers will now have the tools to solve applied problems previously out of reach. Already, researchers are eyeing how to identify at-risk students even if they are learning remotely, how to bend the medical cost curve to a sustainable level, and how to solve the problems facing Florida's coastal communities and fresh water supply.

Additionally, UF recently announced it would make its supercomputer available to the entire State University System for educational and research purposes, further bolstering research and workforce training opportunities and positioning Florida to be a national leader in a field revolutionizing the way we all work and live. Soon, we plan to offer access to the machine even more broadly, boosting the national competitiveness of the United States by partnering with educational institutions and private industry around the country.

Innovation, access, economic impact, world-changing technological advancement: UF's AI initiative provides all these things and more.

If Henry Ford were alive today, I believe he would recognize the importance of what's happening at UF. And while he did not graduate from college, I believe he would be proud to see it happening at an American public university.

Joe Glover is provost and senior vice president of academic affairs at the University of Florida.

See the original post:
Here's why UF is going to use artificial intelligence across its entire curriculum | Column - Tampa Bay Times

Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management – Homeland Security Today – HSToday

Frontex, the European Border and Coast Guard Agency, commissioned RAND Europe to carry out an Artificial intelligence (AI) research study to provide an overview of the main opportunities, challenges and requirements for the adoption of AI-based capabilities in border management.

AI offers several opportunities to the European Border and Coast Guard, including increased efficiency and improving the ability of border security agencies to adapt to a fast-paced geopolitical and security environment. However, various technological and non-technological barriers might influence how AI materializes in the performance of border security functions.

Some of the analyzed technologies included automated border control, object recognition to detect suspicious vehicles or cargo and the use of geospatial data analytics for operational awareness and threat detection.

The findings from the study have now been made public, and Frontex aims to use the data gleaned to shape the future landscape of AI-based capabilities for Integrated Border Management, including AI-related research and innovation projects.

The study identified a wide range of current and potential future uses of AI in relation to five key border security functions, namely: situation awareness and assessment; information management; communication; detection, identification and authentication; and training and exercise.

According to the report, AI is generally believed to bring at least an incremental improvement to the existing ways in which border security functions are conducted. This includes front-end capabilities that end users directly utilize, such as surveillance systems, as well as back-end capabilities that enable border security functions, like automated machine learning.

Potential barriers to AI adoption include knowledge and skills gaps, organizational and cultural issues, and a current lack of conclusive evidence from actual real-life scenarios.

Read the full report at Frontex


See the article here:
Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management Homeland Security Today - HSToday

How To Patent An Artificial Intelligence (AI) Invention: Guidance From The US Patent Office (USPTO) – Intellectual Property – United States – Mondaq…

PatentNext Summary: AI-related inventions have experienced explosive growth. In view of this, the USPTO has provided guidance in the form of an example claim and an "informative" PTAB decision directed to AI-related claims that practitioners can use to aid in preparing robust patent claims on AI-related inventions.

Artificial Intelligence (AI) has experienced explosive growth across various industries. From Apple's Face ID (face recognition) and Amazon's Alexa (voice recognition) to GM Cruise (autonomous vehicles), AI continues to shape the modern world. See Artificial Intelligence.

It comes as no surprise, therefore, that patents related to AI inventions have also experienced explosive growth.

Indeed, in the last quarter of 2020, the United States Patent and Trademark Office (USPTO) reported that patent filings for Artificial Intelligence (AI) related inventions more than doubled from 2002 to 2018. See Office of the Chief Economist, Inventing AI: Tracking The Diffusion Of Artificial Intelligence With Patents, IP DATA HIGHLIGHTS No. 5 (Oct. 2020).

During the same period, however, the U.S. Supreme Court's decision in Alice Corp. v. CLS Bank International cast doubt on the patentability of software-related inventions, a category that AI sits squarely within.

Fortunately, since the Supreme Court's Alice decision, the Federal Circuit has clarified (on numerous occasions) that software-related patents are indeed patent-eligible. See Are Software Inventions Patentable?

More recently, in 2019, the United States Patent and Trademark Office (USPTO) provided its own guidance on the topic of patenting AI inventions. See 2019 Revised Patent Subject Matter Eligibility Guidance. Below we explore these examples.

As part of its 2019 Revised Patent Subject Matter Eligibility Guidance (the "2019 PEG"), the USPTO provided several example patent claims and respective analyses under the two-part Alice test. See Subject Matter Eligibility Examples: Abstract Ideas.

One of these examples ("Example 39") demonstrates a patent-eligible artificial intelligence invention. In particular, Example 39 provides a hypothetical AI invention labeled "Method for Training a Neural Network for Facial Detection" and describes an invention addressing the shortcomings of older facial recognition methods, which suffered from an inability to robustly detect human faces in images with shifts, distortions, and variations in scale and rotation of the face pattern.

The example inventive method recites claim elements for training a neural network across two stages of training-set data so as to minimize false positives for facial detection. The claim steps are reproduced below:

collecting a set of digital facial images from a database;

applying one or more transformations to each digital facial image, including mirroring, rotating, smoothing, or contrast reduction, to create a modified set of digital facial images;

creating a first training set comprising the collected set of digital facial images, the modified set of digital facial images, and a set of digital non-facial images;

training the neural network in a first stage using the first training set;

creating a second training set for a second stage of training comprising the first training set and digital non-facial images that are incorrectly detected as facial images after the first stage of training; and

training the neural network in a second stage using the second training set.
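To make the two-stage flow of these claim steps concrete, here is a minimal Python sketch of the same idea: augment collected positives, train a first stage, then retrain with the false positives mined from a pool of negatives. Everything here is a stand-in, not part of the USPTO example: the toy 8x8 "images", the two chosen transformations, and a simple logistic-regression classifier in place of a real neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(imgs):
    """Two of the claimed transformations: mirroring and contrast reduction."""
    mirrored = imgs[:, :, ::-1]
    low_contrast = imgs * 0.5 + 0.25
    return np.concatenate([mirrored, low_contrast])

def train_logreg(X, y, epochs=200, lr=0.5):
    """Stand-in 'neural network': logistic regression via gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(wb, imgs):
    w, b = wb
    X = imgs.reshape(len(imgs), -1)
    return (1 / (1 + np.exp(-(X @ w + b)))) > 0.5

# Toy data: "faces" have a bright centre block, "non-faces" are uniform noise.
def make_faces(n):
    imgs = rng.normal(0.2, 0.1, (n, 8, 8))
    imgs[:, 2:6, 2:6] += 0.6
    return imgs.clip(0, 1)

def make_nonfaces(n):
    return rng.uniform(0, 1, (n, 8, 8))

faces, nonfaces = make_faces(40), make_nonfaces(40)

# First training set: collected faces, their transformed copies, and non-faces.
stage1_pos = np.concatenate([faces, augment(faces)])
X1 = np.concatenate([stage1_pos, nonfaces]).reshape(-1, 64)
y1 = np.concatenate([np.ones(len(stage1_pos)), np.zeros(len(nonfaces))])
wb = train_logreg(X1, y1)  # first-stage training

# Second training set: the first set plus non-faces wrongly flagged as faces.
pool = make_nonfaces(200)
false_pos = pool[predict(wb, pool)]
X2 = np.concatenate([X1, false_pos.reshape(-1, 64)])
y2 = np.concatenate([y1, np.zeros(len(false_pos))])
wb = train_logreg(X2, y2)  # second-stage training
```

The point of the second stage, as in Example 39, is that retraining on the model's own false positives specifically suppresses the error mode the claim targets.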

The USPTO's analysis of Example 39 explains that the above claim is patent-eligible (and not "directed to" an abstract idea) because the AI-specific claim elements do not recite a mere "abstract idea." See How to Patent Software Inventions: Show an "Improvement". In particular, while some of the claim elements may be based on mathematical concepts, such concepts are not recited in the claim. Further, the claim does not recite a mental process because the steps are not practically performed in the human mind. Finally, the claim does not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Because the claim does not fall into any of these three categories, then, according to the USPTO, the claim is patent-eligible.

As a further example, the Patent Trial and Appeal Board (PTAB) more recently applied the 2019 PEG (as revised) in an ex parte appeal involving an artificial intelligence invention. See Ex parte Hannun (formerly Ex parte Linden), 2018-003323 (April 1, 2019) (designated by the PTAB as an "Informative" decision).

In Hannun, the patent-at-issue related to "systems and methods for improving the transcription of speech into text." The claims included several AI-related elements, including "a set of training samples used to train a trained neural network model" as used to interpret a string of characters for speech translation. Claim 11 of the patent-at-issue is illustrative and is reproduced below:

receiving an input audio from a user;

normalizing the input audio to make a total power of the input audio consistent with a set of training samples used to train a trained neural network model;

generating a jitter set of audio files from the normalized input audio by translating the normalized input audio by one or more time values;

for each audio file from the jitter set of audio files, which includes the normalized input audio:

generating a set of spectrogram frames for each audio file; inputting the audio file along with a context of spectrogram frames into a trained neural network; obtaining predicted character probabilities outputs from the trained neural network; and

decoding a transcription of the input audio using the predicted character probabilities outputs from the trained neural network constrained by a language model that interprets a string of characters from the predicted character probabilities outputs as a word or words.
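The front-end steps of Claim 11 (power normalization, the jitter set, spectrogram frames) can be sketched in a few lines of Python. This is an illustrative reading of the claim language, not the patent's implementation: the reference power, shift values, and frame sizes are arbitrary, and the trained network and language-model decoding are deliberately left as stubs.

```python
import numpy as np

rng = np.random.default_rng(1)

REF_POWER = 0.1  # assumed mean power of the training samples

def normalize(audio, ref_power=REF_POWER):
    """Scale the input so its total power matches the training set's."""
    power = np.mean(audio ** 2)
    return audio * np.sqrt(ref_power / power)

def jitter_set(audio, shifts=(-2, 0, 2)):
    """Translate the normalized audio by small time offsets.

    The zero shift keeps the normalized original in the set, as the
    claim requires ("which includes the normalized input audio").
    """
    return [np.roll(audio, s) for s in shifts]

def spectrogram(audio, frame=64, hop=32):
    """Magnitude spectrogram from Hann-windowed FFT frames."""
    frames = [audio[i:i + frame] * np.hanning(frame)
              for i in range(0, len(audio) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

audio = rng.normal(0, 0.5, 1600)  # stand-in for the user's input audio
norm = normalize(audio)
specs = [spectrogram(a) for a in jitter_set(norm)]

# The remaining claim steps are stubbed: each spectrogram (plus context
# frames) would go into the trained neural network, whose per-frame
# character probabilities a language model then decodes into words.
```

Averaging predictions over the jittered copies is a standard way such a pipeline trades a little compute for robustness to small temporal misalignments.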

Applying the two-part Alice test, the Examiner had rejected the claims, finding them patent-ineligible as merely abstract ideas (i.e., mathematical concepts and certain methods of organizing human activity) without significantly more.

The PTAB disagreed. While the PTAB generally agreed that the patent specification included mathematical formulas, such mathematical formulas were "not recited in the claims" (original emphasis).

Nor did the claims recite "organizing human activity," at least because, according to the PTAB, the claims were directed to a specific implementation comprising technical elements including AI and computer speech recognition.

Finally, and importantly, the PTAB noted the importance of the specification describing how the claimed invention provides an improvement to the technical field of speech recognition, specifically noting that "the Specification describes that using DeepSpeech learning, i.e., a trained neural network, along with a language model 'achieves higher performance than traditional methods on hard speech recognition tasks while also being much simpler.'"

For each of these reasons, the PTAB found the claims of the patent-at-issue in Hannun to be patent-eligible.

Each of Example 39 and the PTAB's informative decision in Hannun demonstrates the importance of drafting AI-related claims (and, in general, software-related claims) to follow a three-part pattern: describe an improvement to the underlying computing invention, describe how the improvement overcomes problems experienced in the prior art, and recite the improvement in the claims. For more information, see How to Patent Software Inventions: Show an "Improvement".

The content of this article is intended to provide a generalguide to the subject matter. Specialist advice should be soughtabout your specific circumstances.

Read the rest here:
How To Patent An Artificial Intelligence (AI) Invention: Guidance From The US Patent Office (USPTO) - Intellectual Property - United States - Mondaq...

System on Chips And The Modern Day Motherboards – Analytics India Magazine

The SoC is the new motherboard.

Data centres are no longer betting on one-size-fits-all compute. Decades of homogeneous compute strategies are being disrupted by the need to optimise. Modern-day data centres are embracing purpose-built System on Chip (SoC) designs to gain more control over peak performance, optimise power consumption and improve scalability. Thus, customisation of chips has become the go-to solution for many cloud providers, and companies like Google Cloud especially are doubling down on this front.

Google introduced the Tensor Processing Unit (TPU) back in 2015. Today TPUs power services such as real-time voice search, photo object recognition, and interactive language translation. TPUs drive DeepMind's powerful AlphaGo algorithms, which outclassed the world's best Go player; they were later used for Chess and Shogi. Today, TPUs have the power to process over 100 million photos a day. Most importantly, TPUs are also used for Google's search results. The search giant even unveiled OpenTitan, the first open-source silicon root-of-trust project. The company's custom hardware solutions range from SSDs to hard drives, network switches, and network interface cards, often in deep collaboration with external partners.

Workloads demand even deeper integration into the underlying hardware.

Just like on a motherboard, CPUs and TPUs come from different sources. A Google data centre consists of thousands of server machines connected to a local network. Google designs custom chips, including a hardware security chip currently being deployed on both servers and peripherals. According to Google Cloud, these chips allow them to securely identify and authenticate legitimate Google devices at the hardware level.

According to the team at GCP, computing at Google is at a critical inflection point. Instead of integrating components on a motherboard, Google focuses more on SoC designs where multiple functions sit on the same chip, or on multiple chips inside one package. The company even claims that the System on Chip is the modern-day motherboard.

To date, writes Amin Vahdat of GCP, the motherboard has been the integration point, where CPUs, networking, storage devices, custom accelerators and memory, all from different vendors, were blended into an optimised system. However, cloud providers such as Google Cloud and AWS, which own large data centres, now gravitate towards deeper integration in the underlying hardware to gain higher performance at lower power consumption.

According to Arm, which NVIDIA recently moved to acquire, renewed interest in design freedom and system optimisation has led to higher compute utilisation, improved performance-power ratios, and the ability to get more out of a physical data centre.

For example, AWS Graviton2 instances, using the Arm Neoverse N1 platform, deliver up to 40 percent better price-performance over the previous x86-based instances at a 20 percent lower price. Silicon solutions such as Ampere's Altra are designed to deliver the performance-per-watt, flexibility, and scalability their customers demand.

The capabilities of cloud instances rely on the underlying architectures and microarchitectures that power the hardware.

Amazon made its silicon ambitions obvious as early as 2015, when it acquired Israel-based Annapurna Labs, known for networking-focused Arm SoCs. Amazon leveraged Annapurna Labs' technology to build a custom Arm server-grade chip, Graviton2. After its release, Graviton2 locked horns with Intel and AMD, the data centre chip industry's major players: while the Graviton2 instance offered 64 physical cores, the competing AMD and Intel instances could manage only 32.

Last year, AWS also launched the custom-built AWS Inferentia chip for hardware specialisation. Inferentia's performance convinced AWS to deploy it for its popular Alexa services, which require state-of-the-art ML for speech processing and other tasks.

Amazon's popular EC2 instances are now powered by AWS Inferentia chips that can deliver up to 30% higher throughput and up to 45% lower cost per inference. Amazon EC2 F1 instances, meanwhile, use FPGAs to enable delivery of custom hardware accelerations. F1 instances are easy to program and come with an FPGA Developer AMI, supporting hardware-level development on the cloud. Examples of target applications that can benefit from F1 instance acceleration include genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression, and big data analytics.

Source: AWS

Following AWS Inferentia's success in providing customers with high-performance ML inference at the lowest cost in the cloud, AWS is launching Trainium to cover what Inferentia does not: training. The Trainium chip is specifically optimised for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

The above table, a performance comparison by AnandTech, shows how cloud providers can ditch the legacy chip makers thanks to Arm's licensing provisions. Even Microsoft is reportedly building an Arm-based processor for Azure data centres. Apart from that custom chip, which is still under wraps, Microsoft has also had a shot at silicon success: it collaborated with AMD, Intel, and Qualcomm Technologies to announce the Microsoft Pluton security processor. The Pluton design builds security directly into the CPU.

To overcome the challenges and realise the opportunities presented by semiconductor densities and capabilities, cloud companies will look to System-on-a-Chip (SoC) design methodologies that incorporate pre-designed components, also called SoC Intellectual Property (SoC-IP), which can then be integrated with their own algorithms. Because SoCs incorporate processors that allow customisation in the layers of software as well as in the hardware around the processors, even Google Cloud is bullish on this approach: it has roped in Intel veteran Uri Frank to lead its server chip design efforts. According to Amin Vahdat, VP, GCP, SoCs offer many orders of magnitude better performance at greatly reduced power and cost compared to assembling individual ASICs on a motherboard. "The future of cloud infrastructure is bright, and it's changing fast," said Vahdat.

View post:
System on Chips And The Modern Day Motherboards - Analytics India Magazine

BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI – Business Day

A guide to an intellectual counter-revolution that is already transforming the world


01 April 2021 - 05:10 John Thornhill

It may not be on the level of the Montagues and the Capulets, or the Sharks and the Jets, but in the world of geeks the rivalry is about as intense as it gets. For decades, two competing tribes of artificial intelligence (AI) experts have been furiously duelling with each other in research labs and conference halls around the world. But rather than swords or switchblades, they have wielded nothing more threatening than mathematical models and computer code.

On one side, the connectionist tribe believes that computers can learn behaviour in the same way humans do, by processing a vast array of interconnected calculations. On the other, the symbolists argue that machines can only follow discrete rules. The machine's instructions are contained in specific symbols, such as digits and letters...

The rest is here:
BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI - Business Day