Media Search:



Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management Homeland Security Today – HSToday

Frontex, the European Border and Coast Guard Agency, commissioned RAND Europe to carry out an artificial intelligence (AI) research study to provide an overview of the main opportunities, challenges and requirements for the adoption of AI-based capabilities in border management.

AI offers several opportunities to the European Border and Coast Guard, including increased efficiency and an improved ability of border security agencies to adapt to a fast-paced geopolitical and security environment. However, various technological and non-technological barriers might influence how AI materializes in the performance of border security functions.

Some of the analyzed technologies included automated border control, object recognition to detect suspicious vehicles or cargo and the use of geospatial data analytics for operational awareness and threat detection.

The findings from the study have now been made public, and Frontex aims to use the data gleaned to shape the future landscape of AI-based capabilities for Integrated Border Management, including AI-related research and innovation projects.

The study identified a wide range of current and potential future uses of AI in relation to five key border security functions, namely: situation awareness and assessment; information management; communication; detection, identification and authentication; and training and exercise.

According to the report, AI is generally believed to bring at least an incremental improvement to the existing ways in which border security functions are conducted. This includes front-end capabilities that end users directly utilize, such as surveillance systems, as well as back-end capabilities that enable border security functions, like automated machine learning.

Potential barriers to AI adoption include knowledge and skills gaps, organizational and cultural issues, and a current lack of conclusive evidence from actual real-life scenarios.

Read the full report at Frontex


See the article here:
Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management Homeland Security Today - HSToday

How To Patent An Artificial Intelligence (AI) Invention: Guidance From The US Patent Office (USPTO) – Intellectual Property – United States – Mondaq…

PatentNext Summary: AI-related inventions have experienced explosive growth. In view of this, the USPTO has provided guidance in the form of an example claim and an "informative" PTAB decision directed to AI-related claims that practitioners can use to aid in preparing robust patent claims on AI-related inventions.

Artificial Intelligence (AI) has experienced explosive growth across various industries. From Apple's Face ID (face recognition) and Amazon's Alexa (voice recognition) to GM Cruise (autonomous vehicles), AI continues to shape the modern world. See Artificial Intelligence.

It comes as no surprise, therefore, that patents related to AI inventions have also experienced explosive growth.

Indeed, in the last quarter of 2020, the United States Patent and Trademark Office (USPTO) reported that patent filings for Artificial Intelligence (AI)-related inventions more than doubled from 2002 to 2018. See Office of the Chief Economist, Inventing AI: Tracking The Diffusion Of Artificial Intelligence With Patents, IP DATA HIGHLIGHTS No. 5 (Oct. 2020).

During the same period, however, the U.S. Supreme Court's decision in Alice Corp. v. CLS Bank International cast doubt on the patentability of software-related inventions, which AI sits squarely within.

Fortunately, since the Supreme Court's Alice decision, the Federal Circuit has clarified (on numerous occasions) that software-related patents are indeed patent-eligible. See Are Software Inventions Patentable?

More recently, in 2019, the United States Patent and Trademark Office (USPTO) provided its own guidance on the topic of patenting AI inventions. See 2019 Revised Patent Subject Matter Eligibility Guidance. Below, we explore these examples.

As part of its 2019 Revised Patent Subject Matter Eligibility Guidance (the "2019 PEG"), the USPTO provided several example patent claims and respective analyses under the two-part Alice test. See Subject Matter Eligibility Examples: Abstract Ideas.

One of these examples ("Example 39") demonstrated a patent-eligible artificial intelligence invention. In particular, Example 39 provides a hypothetical AI invention labeled "Method for Training a Neural Network for Facial Detection" and describes an invention that addresses the shortcomings of older facial recognition methods, which were unable to robustly detect human faces in images where there are shifts, distortions, and variations in the scale and rotation of the face pattern.

The example inventive method recites claim elements for training a neural network across two stages of training-set data so as to minimize false positives in facial detection. The claim elements are reproduced below, followed by a short code sketch of this two-stage strategy:

collecting a set of digital facial images from a database;

applying one or more transformations to each digital facial image, including mirroring, rotating, smoothing, or contrast reduction, to create a modified set of digital facial images;

creating a first training set comprising the collected set of digital facial images, the modified set of digital facial images, and a set of digital non-facial images;

training the neural network in a first stage using the first training set;

creating a second training set for a second stage of training comprising the first training set and digital non-facial images that are incorrectly detected as facial images after the first stage of training; and

training the neural network in a second stage using the second training set.
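For readers who find code clearer than claim language, below is a minimal Python sketch of the two-stage training strategy the claim describes. It uses scikit-learn's MLPClassifier as a stand-in "neural network"; the image arrays and the augment/flatten helpers are invented placeholders for illustration, not part of Example 39.

# Minimal sketch of Example 39's two-stage training, with placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier

def augment(images):
    # Apply a simple transformation (horizontal mirroring) to each image.
    return np.array([np.fliplr(img) for img in images])

def flatten(images):
    return images.reshape(len(images), -1)

# Stage 1: train on faces, transformed faces, and non-faces.
faces = np.random.rand(100, 32, 32)      # placeholder for collected facial images
non_faces = np.random.rand(200, 32, 32)  # placeholder for non-facial images

stage1_X = np.concatenate([flatten(faces), flatten(augment(faces)), flatten(non_faces)])
stage1_y = np.concatenate([np.ones(2 * len(faces)), np.zeros(len(non_faces))])

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
net.fit(stage1_X, stage1_y)

# Stage 2: add the false positives from stage 1 to the training set and retrain.
preds = net.predict(flatten(non_faces))
hard_negatives = flatten(non_faces)[preds == 1]  # non-faces wrongly detected as faces

stage2_X = np.concatenate([stage1_X, hard_negatives])
stage2_y = np.concatenate([stage1_y, np.zeros(len(hard_negatives))])
net.fit(stage2_X, stage2_y)

The second stage is essentially hard-negative mining: non-facial images that the first-stage network wrongly flags as faces are folded back into the training set, which is what the claim credits with reducing false positives.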

The USPTO's analysis of Example 39 explains that the above claim is patent-eligible (and not "directed to" an abstract idea) because the AI-specific claim elements do not recite a mere "abstract idea." See How to Patent Software Inventions: Show an "Improvement". In particular, while some of the claim elements may be based on mathematical concepts, such concepts are not recited in the claim. Further, the claim does not recite a mental process because the steps are not practically performed in the human mind. Finally, the claim does not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Because the claim does not fall into any of these three categories, then, according to the USPTO, the claim is patent-eligible.

As a further example, the Patent Trial and Appeal Board (PTAB) more recently applied the 2019 PEG (as revised) in an ex parte appeal involving an artificial intelligence invention. See Ex parte Hannun (formerly Ex parte Linden), 2018-003323 (April 1, 2019) (designated by the PTAB as an "Informative" decision).

In Hannun, the patent-at-issue related to "systems and methods for improving the transcription of speech into text." The claims included several AI-related elements, including "a set of training samples used to train a trained neural network model" as used to interpret a string of characters for speech translation. Claim 11 of the patent-at-issue is illustrative and is reproduced below, with a structural code sketch following the claim elements:

receiving an input audio from a user; normalizing the input audio to make a total power of the input audio consistent with a set of training samples used to train a trained neural network model;

generating a jitter set of audio files from the normalized input audio by translating the normalized input audio by one or more time values;

for each audio file from the jitter set of audio files, which includes the normalized input audio:

generating a set of spectrogram frames for each audio file; inputting the audio file along with a context of spectrogram frames into a trained neural network; obtaining predicted character probabilities outputs from the trained neural network; and

decoding a transcription of the input audio using the predicted character probabilities outputs from the trained neural network constrained by a language model that interprets a string of characters from the predicted character probabilities outputs as a word or words.
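To make the structure of the claim easier to follow, here is a self-contained Python sketch of the same pipeline. Every component (the crude spectrogram, the DummyNet acoustic model, and the greedy decoder standing in for a language-model-constrained decoder) is an invented placeholder for illustration, not the actual Deep Speech implementation at issue in Hannun.

# Structural sketch of Claim 11: normalize, jitter, spectrogram, network, decode.
import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz '")  # toy character set

def normalize(audio, target_power=0.1):
    # Scale the waveform so its total power matches the training samples.
    power = np.mean(audio ** 2)
    return audio * np.sqrt(target_power / (power + 1e-12))

def jitter_set(audio, shifts=(-160, 0, 160)):
    # Translate the audio by a few sample offsets; shift 0 keeps the original.
    return [np.roll(audio, s) for s in shifts]

def spectrogram(audio, frame_len=256):
    # Very crude magnitude spectrogram: fixed-length frames plus an FFT.
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

class DummyNet:
    # Stand-in for a trained acoustic model: per-frame character probabilities.
    def predict(self, frames):
        logits = np.random.rand(len(frames), len(ALPHABET))
        return logits / logits.sum(axis=1, keepdims=True)

def greedy_decode(char_probs):
    # Placeholder for a language-model-constrained decoder (e.g. beam search).
    return "".join(ALPHABET[i] for i in char_probs.argmax(axis=1))

def transcribe(audio, net):
    audio = normalize(audio)
    probs = [net.predict(spectrogram(clip)) for clip in jitter_set(audio)]
    return greedy_decode(np.mean(probs, axis=0))

print(transcribe(np.random.randn(16000), DummyNet()))

The claim's key AI elements map onto the normalize, jitter_set, predict, and decode steps; in a real system the hypothetical greedy_decode would be replaced by a decoder constrained by a language model.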

Applying the two-part Alice test, the Examiner had rejected the claims, finding them patent-ineligible as merely abstract ideas (i.e., mathematical concepts and certain methods of organizing human activity without significantly more).

The PTAB disagreed. While the PTAB generally agreed that the patent specification included mathematical formulas, such mathematical formulas were "not recited in the claims" (original emphasis).

Nor did the claims recite "organizing human activity," at least because, according to the PTAB, the claims were directed to a specific implementation comprising technical elements including AI and computer speech recognition.

Finally, and importantly, the PTAB noted the importance of the specification describing how the claimed invention provides an improvement to the technical field of speech recognition, specifically noting that "the Specification describes that using DeepSpeech learning, i.e., a trained neural network, along with a language model 'achieves higher performance than traditional methods on hard speech recognition tasks while also being much simpler.'"

For each of these reasons, the PTAB found the claims of the patent-at-issue in Hannun to be patent-eligible.

Example 39 and the PTAB's informative decision in Hannun each demonstrate the importance of drafting AI-related claims (and software-related claims in general) to follow a three-part pattern: describe an improvement to the underlying computing invention, describe how the improvement overcomes problems experienced in the prior art, and recite the improvement in the claims. For more information, see How to Patent Software Inventions: Show an "Improvement".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Read the rest here:
How To Patent An Artificial Intelligence (AI) Invention: Guidance From The US Patent Office (USPTO) - Intellectual Property - United States - Mondaq...

System on Chips And The Modern Day Motherboards – Analytics India Magazine

The SoC is the new motherboard.

Data centres are no longer betting on one-size-fits-all compute. Decades of homogeneous compute strategies are being disrupted by the need to optimise. Modern-day data centres are embracing purpose-built System on Chip (SoC) designs to gain more control over peak performance, power consumption, and scalability. Thus, customisation of chips has become the go-to solution for many cloud providers, and companies like Google Cloud especially are doubling down on this front.

Google introduced the Tensor Processing Unit (TPU) back in 2015. Today, TPUs power services such as real-time voice search, photo object recognition, and interactive language translation. TPUs drive DeepMind's powerful AlphaGo algorithms, which outclassed the world's best Go player; they were later used for chess and shogi. Today, TPUs have the power to process over 100 million photos a day. Most importantly, TPUs are also used for Google's search results. The search giant even unveiled OpenTitan, the first open-source silicon root-of-trust project. The company's custom hardware solutions range from SSDs to hard drives, network switches, and network interface cards, often built in deep collaboration with external partners.

Workloads demand even deeper integration into the underlying hardware.

Just like on a motherboard, CPUs and TPUs come from different sources. A Google data centre consists of thousands of server machines connected to a local network. Google designs custom chips, including a hardware security chip currently being deployed on both servers and peripherals. According to Google Cloud, these chips allow them to securely identify and authenticate legitimate Google devices at the hardware level.

According to the team at GCP, computing at Google is at a critical inflection point. Instead of integrating components on a motherboard, Google now focuses more on SoC designs where multiple functions sit on the same chip or on multiple chips inside one package. The company has even claimed that the SoC is the modern-day motherboard.

To date, writes Amin Vahdat of GCP, the motherboard has been the integration point, where CPUs, networking, storage devices, custom accelerators, and memory, all from different vendors, are blended into an optimised system. However, cloud providers, especially companies like Google Cloud and AWS that own large data centres, are gravitating towards deeper integration into the underlying hardware to gain higher performance at lower power consumption.

According to Arm (which NVIDIA recently agreed to acquire), renewed interest in design freedom and system optimisation has led to higher compute utilisation, improved performance-power ratios, and the ability to get more out of a physical datacenter.

For example, AWS Graviton2 instances, built on the Arm Neoverse N1 platform, deliver up to 40 percent better price-performance than the previous x86-based instances at a 20 percent lower price. Silicon solutions such as Ampere's Altra are designed to deliver the performance-per-watt, flexibility, and scalability that their customers demand.

The capabilities of cloud instances rely on the underlying architectures and microarchitectures that power the hardware.

Amazon made its silicon ambitions obvious as early as 2015, when it acquired Israel-based Annapurna Labs, known for networking-focused Arm SoCs. Amazon leveraged Annapurna Labs' tech to build a custom Arm server-grade chip, Graviton2. After its release, Graviton2 locked horns with Intel and AMD, the data centre chip industry's major players: while the Graviton2 instance offered 64 physical cores, comparable AMD or Intel instances could manage only 32 physical cores.

Last year, AWS even launched custom-built AWS Inferentia chips in the hardware specialisation department. Inferentia's performance convinced AWS to deploy the chips for its popular Alexa services, which require state-of-the-art ML for speech processing and other tasks.

Amazon's popular EC2 instances are now powered by AWS Inferentia chips that can deliver up to 30% higher throughput and up to 45% lower cost per inference. Meanwhile, Amazon EC2 F1 instances use FPGAs to enable delivery of custom hardware accelerations. F1 instances are easy to program, come with an FPGA Developer AMI, and support hardware-level development on the cloud. Examples of target applications that can benefit from F1 instance acceleration include genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression, and big data analytics.


Following AWS Inferentia's success in providing customers with high-performance ML inference at the lowest cost in the cloud, AWS is launching Trainium to address the training side that Inferentia does not cover. The Trainium chip is specifically optimised for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines.

A performance comparison by AnandTech shows how cloud providers can ditch the legacy chip makers, thanks to Arm's licensing provisions. Even Microsoft is reportedly building an Arm-based processor for Azure data centres. Apart from that custom chip, which is still under wraps, Microsoft has also had a shot at silicon success: it collaborated with AMD, Intel, and Qualcomm Technologies to announce the Microsoft Pluton security processor. The Pluton design builds security directly into the CPU.

To overcome the challenges and realise the opportunities presented by semiconductor densities and capabilities, cloud companies will look to System-on-a-Chip (SoC) design methodologies that incorporate pre-designed components, also called SoC Intellectual Property (SoC-IP), which can then be integrated with their own algorithms. Because SoCs incorporate processors that allow customisation in the layers of software as well as in the hardware around the processors, even Google Cloud is bullish on this approach; it has roped in Intel veteran Uri Frank to lead its server chip design efforts. According to Amin Vahdat, VP at GCP, SoCs offer many orders of magnitude better performance at greatly reduced power and cost compared to assembling individual ASICs on a motherboard. "The future of cloud infrastructure is bright, and it's changing fast," said Vahdat.

View post:
System on Chips And The Modern Day Motherboards - Analytics India Magazine

BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI – Business Day

A guide to an intellectual counter-revolution that is already transforming the world

BL PREMIUM

01 April 2021 - 05:10 John Thornhill

It may not be on the level of the Montagues and the Capulets, or the Sharks and the Jets, but in the world of geeks the rivalry is about as intense as it gets. For decades, two competing tribes of artificial intelligence (AI) experts have been furiously duelling with each other in research labs and conference halls around the world. But rather than swords or switchblades, they have wielded nothing more threatening than mathematical models and computer code.

On one side, the connectionist tribe believes that computers can learn behaviour in the same way humans do, by processing a vast array of interconnected calculations. On the other, the symbolists argue that machines can only follow discrete rules. The machines' instructions are contained in specific symbols, such as digits and letters...

The rest is here:
BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI - Business Day

Reinforcement learning: The next great AI tech moving from the lab to the real world – VentureBeat


Reinforcement learning (RL) is a powerful type of artificial intelligence technology that can be used to learn strategies to optimally control large, complex systems such as manufacturing plants, traffic control systems (road/train/aircraft), financial portfolios, robots, etc. It is currently transitioning from research labs to highly impactful, real-world applications. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars.

AI systems that are typically used in industry perform pattern recognition to make a prediction. For instance, they may recognize patterns in images to detect faces (face detection), or recognize patterns in sales data to predict a change in demand (demand forecasting), and so on. Reinforcement learning methods, on the other hand, are used to make optimal decisions or take optimal actions in applications where there is a feedback loop. An example where both traditional AI methods and RL may be used, but for different purposes, will make the distinction clearer.

Say we are using AI to help operate a manufacturing plant. Pattern recognition may be used for quality assurance, where the AI system uses images and scans of the finished product to detect any imperfections or flaws. An RL system, on the other hand, would compute and execute the strategy for controlling the manufacturing process itself (by, for example, deciding which lines to run, controlling machines/robots, deciding which product to manufacture, and so on). The RL system will also try to ensure that the strategy is optimal in that it maximizes some metric of interest such as the output volume while maintaining a certain level of product quality. The problem of computing the optimal control strategy, which RL solves, is very difficult for some subtle reasons (often much more difficult than pattern recognition).
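As a concrete (and deliberately tiny) illustration of the feedback loop an RL controller optimizes over, here is a Python sketch of tabular Q-learning on an invented two-line "plant". The states, actions, rewards, and dynamics are all made up for illustration; they are not from the article.

# Toy Q-learning controller for a two-line "plant" (all dynamics invented).
import random

ACTIONS = ["run_line_1", "run_line_2"]
STATES = ["low_backlog", "high_backlog"]

def step(state, action):
    # Toy plant dynamics: reward is output volume, next state is the new backlog.
    if state == "high_backlog" and action == "run_line_1":
        return "low_backlog", 10.0   # line 1 clears the backlog and produces a lot
    if state == "low_backlog" and action == "run_line_2":
        return "high_backlog", 4.0
    return state, 2.0

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = "low_backlog"
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # the learned control strategy for each plant state

The learned policy maps each plant state to the action that maximizes long-run output, which is the kind of control strategy the article contrasts with pattern-recognition-style prediction.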

In computing the optimal strategy, or "policy" in RL parlance, the main challenge an RL learning algorithm faces is the so-called temporal credit assignment problem. That is, the impact of an action (e.g. "run line 1 on Wednesday") in a given system state (e.g. the current output level of machines, how busy each line is, etc.) on the overall performance (e.g. total output volume) is not known until after (potentially) a long time. To make matters worse, the overall performance also depends on all the actions that are taken subsequent to the action being evaluated. Together, this implies that, when a candidate policy is executed for evaluation, it is difficult to know which actions were the good ones and which were the bad ones; in other words, it is very difficult to assign credit to the different actions appropriately. The large number of potential system states in these complex problems further exacerbates the situation via the dreaded curse of dimensionality. A good way to get an intuition for how an RL system solves all these problems at the same time is to look at the recent spectacular successes such systems have had in the lab.
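A small numeric example makes the credit assignment idea concrete. Assuming an invented reward sequence in which the payoff only arrives several steps after the decisive action, the discounted return credited to each earlier step can be computed as follows:

# Discounted returns for an invented reward sequence (illustration only).
rewards = [0, 0, 0, 0, 12]   # the payoff only shows up several steps later
gamma = 0.9                  # discount factor

def returns(rewards, gamma):
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

print(returns(rewards, gamma))  # [7.8732, 8.748, 9.72, 10.8, 12.0]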

Many of the recent, prominent demonstrations of the power of RL come from applying them to board games and video games. The first RL system to impress the global AI community was able to learn to outplay humans in different Atari games when only given as input the images on screen and the scores received by playing the game. This was created in 2013 by London-based AI research lab Deepmind (now part of Alphabet Inc.). The same lab later created a series of RL systems (or agents), starting with the AlphaGo agent, which were able to defeat the top players in the world in the board game Go. These impressive feats, which occurred between 2015 and 2017, took the world by storm because Go is a very complex game, with millions of fans and players around the world, that requires intricate, long-term strategic thinking involving both the local and global board configurations.

Subsequently, Deepmind and the AI research lab OpenAI have released systems for playing the video games Starcraft and DOTA 2 that can defeat the top human players around the world. These games are challenging because they require strategic thinking, resource management, and control and coordination of multiple entities within the game.

All the agents mentioned above were trained by letting the RL algorithm play the games many, many times (millions of plays or more) and learning which policies work and which do not against different kinds of opponents and players. The large number of trials was possible because these were all games running on a computer. In determining the usefulness of various policies, the RL algorithms often employed a complex mix of ideas: hill climbing in policy space, playing against themselves, running leagues internally amongst candidate policies, using policies played by humans as a starting point, and properly balancing exploration of the policy space against exploitation of the good policies found so far. Roughly speaking, the large number of trials enabled exploring many different game states that could plausibly be reached, while the complex evaluation methods enabled the AI system to determine which actions are useful in the long term, under plausible plays of the games, in these different states.
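As a toy illustration of one of these evaluation ideas, the sketch below runs an internal "league" in which candidate policies play a simple matching game against each other and are ranked by total wins. The game and the biased policies are invented for illustration; real systems such as AlphaStar use far more elaborate league training.

# Toy internal league: rank candidate policies by head-to-head wins.
import random
from itertools import combinations

def play(policy_a, policy_b, rounds=200):
    # Simple matching game: A wins a round when both pick the same value.
    wins_a = wins_b = 0
    for _ in range(rounds):
        if policy_a() == policy_b():
            wins_a += 1
        else:
            wins_b += 1
    return wins_a, wins_b

# Candidate policies: each returns 1 or 0 with a different bias.
candidates = {f"p_bias_{b:.1f}": (lambda b=b: int(random.random() < b))
              for b in (0.1, 0.3, 0.5, 0.9)}

scores = {name: 0 for name in candidates}
for (name_a, pol_a), (name_b, pol_b) in combinations(candidates.items(), 2):
    wa, wb = play(pol_a, pol_b)
    scores[name_a] += wa
    scores[name_b] += wb

print(sorted(scores.items(), key=lambda kv: -kv[1]))  # the league table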

A key blocker to using these algorithms in the real world is that it is not possible to run millions of trials. Fortunately, a workaround immediately suggests itself: first, create a computer simulation of the application (a manufacturing plant simulation, a market simulation, etc.), then learn the optimal policy in the simulation using RL algorithms, and finally adapt the learned optimal policy to the real world by running it a few times and tweaking some parameters. Famously, in a very compelling 2019 demo, OpenAI showed the effectiveness of this approach by training a robot arm to solve the Rubik's cube puzzle one-handed.
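The workaround described above can be sketched as a short workflow: train with many cheap episodes in a simulator, then adapt with a handful of runs on the real system. Everything below (the 1-D toy systems, the single "gain" parameter, the hill-climbing trainer) is invented to show the shape of the approach, not how OpenAI trained its robot hand.

# Sim-to-real sketch: learn in a simulator, adapt with a few real runs.
import random

class Toy1DSystem:
    # Reach the target position 1.0; reward is negative distance to target.
    def __init__(self, drift):
        self.drift = drift  # the real system drifts differently from the sim
    def episode(self, gain):
        pos, total = 0.0, 0.0
        for _ in range(20):
            pos += gain * (1.0 - pos) + self.drift
            total -= abs(1.0 - pos)
        return total

def train(system, episodes, gain=0.1):
    # Hill-climb a single policy parameter ("gain") on the given system.
    best = system.episode(gain)
    for _ in range(episodes):
        candidate = gain + random.gauss(0, 0.05)
        score = system.episode(candidate)
        if score > best:
            gain, best = candidate, score
    return gain

sim = Toy1DSystem(drift=0.0)
real = Toy1DSystem(drift=0.02)              # the real plant never matches the sim exactly

gain = train(sim, episodes=5000)            # cheap: huge numbers of trials are fine in simulation
gain = train(real, episodes=20, gain=gain)  # expensive: only a few real runs to adapt
print(f"adapted gain: {gain:.3f}")

The mismatch between the simulator's drift and the real system's drift plays the role of the sim-to-real gap: most of the learning happens in simulation, and only a small amount of real interaction is used to close the gap.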

For this approach to work, your simulation has to represent the underlying problem with a high degree of accuracy. The problem you're trying to solve also has to be "closed" in a certain sense: there cannot be arbitrary or unseen external effects that may impact the performance of the system. For example, the OpenAI solution would not work if the simulated robot arm was too different from the real robot arm, or if there were attempts to knock the Rubik's cube out of the real robot arm's grip (though it may naturally be, or be explicitly trained to be, robust to certain kinds of obstructions and interferences).

These limitations will sound acceptable to most people. However, in real applications it is tricky to properly circumscribe the competence of an RL system, and this can lead to unpleasant surprises. In our earlier manufacturing plant example, if a machine is replaced with one that is a lot faster or slower, it may change the plant dynamics enough that it becomes necessary to retrain the RL system. Again, this is not unreasonable for any automated controller, but stakeholders may have far loftier expectations from a system that is artificially intelligent, and such expectations will need to be managed.

Regardless, at this point in time, the future of reinforcement learning in the real world does seem very bright. There are many startups offering reinforcement learning products for controlling manufacturing robots (Covariant, Osaro, Luffy), managing production schedules (Instadeep), enterprise decision making (Secondmind), logistics (Dorabot), circuit design (Instadeep), controlling autonomous cars (Wayve, Waymo, Five AI), controlling drones (Amazon), running hedge funds (Piit.ai), and many other applications that are beyond the reach of pattern recognition based AI systems.

Each of the Big Tech companies has made heavy investments in RL research; Google, for example, acquired DeepMind for a reported £400 million (approx. $525 million) in 2014. So it is reasonable to assume that RL is either already in use internally at these companies or is in the pipeline, but they're keeping the details pretty quiet for competitive-advantage reasons.

We should expect to see some hiccups as promising applications for RL falter, but it will likely claim its place as a technology to reckon with in the near future.

M M Hassan Mahmud is a Senior AI and Machine Learning Technologist at Digital Catapult, with a background in machine learning within academia and industry.

Original post:
Reinforcement learning: The next great AI tech moving from the lab to the real world - VentureBeat