Archive for the ‘Artificial Intelligence’ Category

Staying ahead of the artificial intelligence curve with help from MIT – MIT News

In August, the young artificial intelligence process automation company Intelenz, Inc. announced its first U.S. patent, covering an AI-enabled software-as-a-service application for automating repetitive activities, improving process execution, and reducing operating costs. For company co-founder Renzo Zagni, the patent is a powerful testament to the value of his MIT educational experience.

Over the course of his two-decade career at Oracle, Zagni worked his way from database administrator to vice president of Enterprise Applications-IT. After spending seven years in his final role, he was ready to take on a new challenge by starting his own company.

From employee to entrepreneur

Zagni launched Intelenz in 2017 with a goal of keeping his company on the cutting edge. Doing so required that he stay up to date on the latest machine learning knowledge and techniques. At first, that meant exploring new concepts on his own. But to get to the next level, he realized he needed a little more formal education. That's when he turned to MIT.

"When I discovered that I could take courses at MIT, I thought, 'What better place to learn about artificial intelligence and machine learning?'" he says. "Access to MIT faculty was something that I simply couldn't pass up."

Zagni enrolled in MIT Professional Education's Professional Certificate Program in Machine Learning and Artificial Intelligence, traveling from California to Cambridge, Massachusetts, to attend accelerated courses on the MIT campus.

As he continued to build his startup, one key to demystifying machine learning came from MIT Professor Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science and a member of MIT's Computer Science and Artificial Intelligence Laboratory. "Professor Barzilay used real-life examples in a way that helped us quickly understand very complex concepts behind machine learning and AI," Zagni says. "And her passion and vision to use the power of machine learning to help win the fight against cancer was commendable and inspired us all."

The insights Zagni gained from Barzilay and other machine learning/AI faculty members helped him shape Intelenz's early products and continue to influence his company's product development today; most recently, in his patented technology, the "Service Tickets Early Warning System." The technology is an important representation of Intelenz's ability to develop AI models aimed at automating and improving business processes at the enterprise level.

"We had a problem we wanted to solve and knew that artificial intelligence and machine learning could possibly address it. And MIT gave me the tools and the methodologies to translate these needs into a machine learning model that ended up becoming a patent," Zagni says.

Driving machine learning with innovation

As an entrepreneur looking to push the boundaries of information technology, Zagni wasn't content to simply use existing solutions; innovation became a key goal very early in the process.

"For professionals like me who work in information technology, innovation and artificial intelligence go hand-in-hand," Zagni says.

While completing machine learning courses at MIT, Zagni simultaneously enrolled in MIT Professional Education's Professional Certificate Program in Innovation and Technology. Combining his new AI knowledge with the latest approaches in innovation was a game-changer.

"During my first year with MIT, I was putting together the Intelenz team, hiring developers, and completing designs. What I learned in the innovation courses helped us a lot," Zagni says. "For instance, Blake Kotelly's Mastering Innovation and Design Thinking course made a huge difference in how we develop our solutions and engage our customers. And our customers love the design-thinking approach."

Looking forward

While his progress at Intelenz is exciting, Zagni is anything but done. As he continues to develop his organization and its AI-enabled offerings, he's looking ahead to additional opportunities for growth.

"We're already looking for the next technology that is going to allow us to disrupt the market," Zagni says. "We're hearing a lot about quantum computing and other technology innovations. It's very important for us to stay on top of them if we want to remain competitive."

He remains committed to lifelong learning and says he will definitely be looking to future MIT courses; he recommends other professionals in his field do the same.

"Being part of the MIT ecosystem has really put me ahead of the curve by providing access to the latest information, tools, and methodologies," Zagni says. "And on top of that, the faculty are very helpful and truly want to see participants succeed."

Original post:
Staying ahead of the artificial intelligence curve with help from MIT - MIT News

Machine Learning and Artificial Intelligence to Revolutionize the World of Art and Creativity – Entrepreneur

November 10, 2020 | 4 min read

Artificial intelligence is revolutionizing various industries, markets, and services. However, the creative industries and the art world have not yet been able to use the full potential of this technology. Two Chilean entrepreneurs devised a platform to go further.

Using the latest technology, they allow creators, amateur filmmakers, visual artists, and even the film and music industries to use artificial intelligence algorithms in their work. This is Runway, a platform that integrates machine learning and artificial intelligence into the world of art and creativity.

Founded at the end of 2018 by Cristóbal Valenzuela, Alejandro Matamala, and Anastasis Germanidis, Runway started as a thesis project they developed at New York University (NYU), where they met while pursuing postgraduate degrees.

Its creators define the platform as part of the new generation of creative tools. If Photoshop and Adobe revolutionized the creative market a few decades ago, Runway is looking to do so for years to come.

In this case, the startup's bet is that, with its cloud software, it can produce "synthetic content": audiovisual content that is automatically generated, modified, and edited with artificial intelligence algorithms.

Cristóbal, Alejandro, and Anastasis, founders of Runway. Courtesy photo

"We continue to create audiovisual content in the same way that we have done for decades and that makes the process unnecessarily slow, expensive and difficult. With AI algorithms anyone can create hyper-realistic animations in seconds and edit them automatically. Something that only Hollywood or large production companies and special effects have been able to do so far, "explains Valenzuela.

At the same time, Runway shortens development times and democratizes access to this technology for as many creators as possible. "These technologies are radically changing the way we create content because algorithms are already capable of generating images, text, video, and sound in an ultra-realistic way," explains Cristóbal, to which Alejandro adds: "If we put these tools in the hands of people who have never had access to them before, they will start to think of new ways of producing art, generating content, and telling stories."

The platform's impact began with a tweet asking how many people would use a tool like the one they had in mind. In less than 48 hours, they had responses from engineers at Facebook and Google, from universities, and even from the media, saying they found the prospect of a creative tool built on artificial intelligence algorithms incredible. Immediately after this, they created the company and have not looked back.

The path they have traveled has been fast. As a result of their work, they have already generated interest from different investment funds. In the same year that they created Runway, they completed a $2 million investment round with US funds specialized in technology research startups: Lux Capital, Amplify Partners, and Compound Ventures.

On a practical level, they have also carried out important projects: a collaboration with New Balance on the design of a shoe; providing the software with which the rock band YACHT created part of the audiovisual content of its latest album, which was nominated for a Grammy Award; working on the creation of AI-generated short films; and collaborating with visual artists and filmmakers.

Along with this, the cloud software has drawn a response from the academic world, leading to alliances with various universities in the United States, such as NYU, MIT, and UCLA, while in Chile the software is already being used at the Universidad Adolfo Ibáñez, the Universidad de las Américas, and the Pontificia Universidad Católica.

At the moment, each step Runway takes is a step toward the future, and that is precisely its founders' bet: developing residencies and internships at the company for artists and researchers so they can deepen the uses and applications of the technology. This is a practice they had implemented before the outbreak of the coronavirus pandemic and will resume in a few weeks from their offices in Brooklyn, New York City.

Read the original post:
Machine Learning and Artificial Intelligence to Revolutionize the World of Art and Creativity - Entrepreneur

Artificial Intelligence in healthcare and clinical practice in the COVID era – Health Europa

Automation during the industrial revolution led to a profound change in working practices across the 18th and 19th centuries. Currently, we are in the midst of a fourth industrial revolution, with globalisation and automation affecting every aspect of our working lives and leisure activities. The COVID-19 pandemic has provided a further driver for change, altering how we work and interact, with remote working and reduced human interaction at the centre of global initiatives to reduce the spread of the virus.

Healthcare systems generate large quantities of complex datasets pertaining to patients. Artificial Intelligence (AI) can offer solutions to medical problems by attempting to replicate decisions that would otherwise require human intelligence. Specific algorithms can be created to make associations between data and predict future outcomes. The COVID-19 pandemic has accelerated change within healthcare systems and driven interest in automated algorithms capable of assisting hospitals in diagnostics, decision making, and repetitive clerical tasks, thus reducing the potential footfall of staff on site. Automation and intelligent algorithms that learn and improve with further iterative cycles require data; the ethics of large personal datasets and the challenges of anonymisation remain in their infancy.

AI applications within healthcare can be broadly categorised into diagnosis, research, management, and system analysis. In the time of COVID, workforce adaptations have led healthcare providers to deploy staff wisely, reducing on-site requirements and moving towards remote working.

Healthcare records in most countries have moved from paper-based records and notes to digital media. The electronic health record (EHR) allows the capture of large quantities of data across patient groups. Future patient care aims to develop tailored treatments for patients in a cost-effective manner.

The use of large datasets requires an element of data curation. Data must be retrieved from multiple disparate sources, cleaned to remove anomalies, and harmonised to ensure similar datasets are compared across patient records. These processes need an element of human oversight to ensure the correct data is fed into the algorithms.
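To make the retrieve-clean-harmonise steps concrete, here is a minimal Python sketch using the pandas library. The file names, column names, and plausibility thresholds are hypothetical, invented purely for illustration.

```python
# A minimal sketch of data curation, assuming hypothetical CSV exports
# from two clinical systems that use different column conventions.
import pandas as pd

# Retrieve: load data from disparate sources (file names are illustrative).
labs = pd.read_csv("labs_system_a.csv")      # columns: patient_id, creatinine
vitals = pd.read_csv("vitals_system_b.csv")  # columns: PatientID, HeartRate

# Harmonise: align column names so similar datasets can be compared.
vitals = vitals.rename(columns={"PatientID": "patient_id", "HeartRate": "heart_rate"})

# Clean: remove duplicates and physiologically implausible anomalies.
vitals = vitals.drop_duplicates(subset="patient_id")
vitals = vitals[(vitals["heart_rate"] > 20) & (vitals["heart_rate"] < 300)]

# Merge into one record per patient before feeding the algorithms.
curated = labs.merge(vitals, on="patient_id", how="inner")
print(curated.head())
```

The human oversight described above sits around steps like these: someone must still confirm that the column mappings and plausibility thresholds are clinically sensible.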

Administrative applications are resource-heavy and repetitive. They include workflow management tasks such as uploading referral letters from primary care, setting up referral assessment services, and booking patients with the correct service provider in secondary care. Robotic process automation (RPA) consists of computer programmes that follow rules to carry out these manual, resource-heavy tasks.
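As a rough illustration of the rules such programmes follow, the toy Python sketch below routes a referral letter to a service by keyword. The keywords and service names are invented for the example; a real RPA deployment would operate against the hospital's actual booking systems.

```python
# A toy illustration of rules-based routing, the kind of logic RPA encodes.
# The keywords and service names below are hypothetical.
def route_referral(letter_text: str) -> str:
    rules = {
        "chest pain": "Cardiology",
        "fracture": "Orthopaedics",
        "rash": "Dermatology",
    }
    text = letter_text.lower()
    for keyword, service in rules.items():
        if keyword in text:
            return service
    return "General triage"  # fall back when no rule matches

print(route_referral("Patient reports intermittent chest pain on exertion."))
# -> Cardiology
```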

Additionally, process management systems (PMS), while embedded within commercial businesses, are still in their infancy in healthcare applications. Patients are individuals, and tailoring care to them requires reacting to changing physiological parameters within the confines of the organisation. To standardise care, clinical pathways have been developed to manage patient care from referral through diagnostic testing to the eventual treatment pathway. While standardisation across patient groups allows some of these processes to be automated (templates for referral, standard test orders), a more patient-specific approach is the ultimate goal, generating a tension to resolve between standardisation and care tailored to each patient's individual needs.

Flexibility within a process management system requires technological skills to allow tasks to be postponed or reorganised. However, healthcare professionals often lack this technical skillset, so a user-friendly interface is required. Further work on user interface and user experience (UI/UX) is therefore needed to ensure a system that allows flexibility without losing the advantages of rapid automation and the processing of large numbers of patients within pathway management systems.

Increasingly, call-centre staff for websites have been replaced by chatbots that use natural language processing (NLP) to provide callers with information and manage queries. A chatbot is a type of AI programme that can conduct an intelligent conversation via text or auditory methods. It is predicted that by 2025 the global chatbot market will be worth $1.23bn. For hospitals dealing with upwards of 10,000 patient appointments per week, the use of chatbots to handle appointment queries is still in its infancy compared with more established sectors such as banking or commerce.
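At its core, such a chatbot performs intent matching: mapping a patient's free-text message to a known query type and a reply. The Python sketch below shows the idea with invented intents and canned responses; production systems use far richer NLP than keyword patterns.

```python
# A minimal sketch of intent matching for appointment queries.
# The patterns and replies are illustrative only.
import re

INTENTS = {
    r"\bcancel\w*\b": "To cancel an appointment, reply with your booking reference.",
    r"\b(reschedule|change|move)\b": "To reschedule, tell me your preferred date.",
    r"\b(when|what time)\b": "I can text you your next appointment details. Shall I?",
}

def reply(message: str) -> str:
    text = message.lower()
    for pattern, answer in INTENTS.items():
        if re.search(pattern, text):
            return answer
    return "Sorry, I didn't catch that. A member of staff will call you back."

print(reply("Can I change the date of my appointment?"))
# -> "To reschedule, tell me your preferred date."
```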

The current COVID-19 pandemic has highlighted the need for rapid screening and testing of patients to improve treatment pathways and reduce the risk of cross-infection. Clinical testing requires taking biological samples from patients, which is resource-heavy and incorporates a time lag before results are available from real-time polymerase chain reaction (RT-PCR) testing. AI that accesses electronic health records (EHRs) of routinely ordered tests and vital signs can provide an effective tool to screen patients in emergency departments and hospital admission units.

Predictive analytics utilise AI algorithms to analyse healthcare data from EHRs to predict future outcomes, aiming to improve outcomes and the patient experience while reducing costs. Data collected from EHRs can be supplemented with data from wearable technology and medical devices. Risk-prediction models utilising AI would improve with successive data-collection cycles, aiming to supplement decision making by clinicians. Applications include the management of chronic diseases such as chronic renal failure, diabetes, and cardiovascular disease. Patient populations vary across geographical healthcare providers, and the ability of a predictive model to learn from its local population provides an advantage over established static modelling. Scalability across healthcare providers can therefore be challenging due to differences in socio-economic factors and populations based on geographical location. The ethical implications for health insurance and risk stratification are in their infancy, and issues around data governance and data sharing may have a significant impact that is yet to be fully regulated.
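As a sketch of what building such a risk-prediction model involves, the Python example below fits a logistic regression to synthetic, EHR-style features using scikit-learn. The features, labels, and data are entirely fabricated; the point is the workflow of training on past records and scoring new patients.

```python
# A minimal sketch of an EHR-based risk model trained on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricated, standardised features per patient (think age, creatinine,
# systolic BP) and a fabricated adverse-outcome label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Each successive data-collection cycle would retrain the model, letting it
# adapt to the local population, as described above.
print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
print("Risk score for one new patient:", round(model.predict_proba(X_test[:1])[0, 1], 2))
```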

Diagnostic applications of AI have exploded over the past ten years. Imaging studies such as breast mammograms, as well as histological analysis, rely on skilled scientists or clinicians performing repetitive tasks to manually identify abnormalities, and an inaccurate diagnosis can have serious consequences for patient care. AI programmes can be trained to perform these tasks and, in some studies, have matched the accuracy of a trained clinician in diagnosing abnormalities. Future applications in diagnostic imaging include the field of radiomics, which extracts nuanced features peculiar to imaging modalities, such as wavelength, texture, and shape. This additional information can provide further data for diagnosis and patient-specific prognostic indicators.

Given the potential application of AI within the healthcare setting, the question remains: how will this impact the workforce? Amid the fourth industrial revolution, 50% of companies predict that automation will decrease their numbers of full-time staff by 2022, and McKinsey Global Institute reports suggest that robots could replace 800 million workers across the world by 2030. Automation of clerical processes and care pathways could impact the non-clinical workforce within a healthcare setting, and specialties such as radiology, where imaging reports can be automated and produced by AI algorithms, may soon feel the change. The ethics of data sharing and the implications for patients and their insurers are a further area of controversy. We enter a brave new world.

Caroline B Hing
Yasmin Antoniou
AI for Good
www.aiforgood.co.uk

This article is from issue 15 of Health Europa.

Read the original:
Artificial Intelligence in healthcare and clinical practice in the COVID era - Health Europa

Vatican Library Enlists Artificial Intelligence to Protect Its Digitized Treasures – Smithsonian Magazine

Since 2010, the Vatican Apostolic Library has worked to digitize its sprawling collection of more than 80,000 manuscripts, making a trove of rare historical treasures freely accessible to anyone with an internet connection.

But the tricky work of uploading the contents of the Roman Catholic Church's historic library comes with new risks in the digital age. As Harriet Sherwood reports for the Observer, the library recently hired cybersecurity firm Darktrace to defend its digitized vault against attacks that could manipulate, delete, or steal parts of the online collection.

Founded by University of Cambridge mathematicians, Darktrace uses artificial intelligence (A.I.) modeled on the human immune system to detect abnormal activity in the Vatican's digital systems, writes Brian Boucher for artnet News. On average, the A.I. system defends the library against 100 security threats each month, according to a Darktrace statement.
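Immune-system-style security tools generally work by learning a baseline of normal activity and flagging deviations from it. The Python sketch below illustrates that general idea with a standard anomaly detector; the traffic features and numbers are invented, and this is not a description of Darktrace's actual system.

```python
# A minimal sketch of baseline-and-deviation anomaly detection.
# Features (requests/min, bytes transferred) and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Learn what "normal" sessions look like from historical activity.
normal_traffic = rng.normal(loc=[50, 2000], scale=[5, 200], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_traffic)

# A session with an unusually heavy transfer is flagged as -1 (anomalous).
suspicious = np.array([[400, 90000]])
print(detector.predict(suspicious))  # -> [-1]
```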

"The number of cyber threats faced by the library continues to increase," its chief information officer, Manlio Miceli, tells the Observer. Threats to digital security come in many shapes and sizes, but Miceli notes that criminals could tamper with the library's digitized files or conduct a ransomware attack, in which hackers effectively hold files ransom in exchange for a hefty sum.

"While physical damage is often clear and immediate, an attack of this kind wouldn't have the same physical visibility, and so has the potential to cause enduring and potentially irreparable harm, not only to the archive but to the world's historical memory," Miceli tells the Observer.

He adds, "These attacks have the potential to impact the Vatican library's reputation, one it has maintained for hundreds of years, and have significant financial ramifications that could impact our ability to digitize the remaining manuscripts."

Though the Vatican Library dates back to the days of the first Roman Catholic popes, little is known about the contents of its collections prior to the 13th century, per Encyclopedia Britannica. Pope Nicholas V (1447–1455) greatly expanded the collection, and by 1481, the archive held the most books of any institution in the Western world, according to the Library of Congress.

To date, about a quarter of the library's 80,000 manuscripts have been digitized. As Kabir Jhala reports for the Art Newspaper, holdings include such treasures as Sandro Botticelli's 15th-century illustrations of the Divine Comedy and the Codex Vaticanus, one of the earliest known copies of the Bible. Other collection highlights include notes and sketches by Michelangelo and the writings of Galileo.

The Vatican debuted the digitized version of its prized Vergilius Vaticanus in 2016. One of the few remaining illustrated manuscripts of classic literature, the fragmented text features Virgil's Aeneid, an epic poem detailing the travels of a Trojan named Aeneas and the foundation of Rome. The ancient document, likely crafted around 400 A.D. by a single master scribe and three painters, still bears its vivid original illustrations and gilded lettering.

The library isn't the only section of the Vatican that's prone to cyber breaches. As the New York Times reported in July, Chinese hackers infiltrated the Holy See's computer networks this summer ahead of sensitive talks in Beijing over the appointment of bishops, part of ongoing discussions that will determine how the Catholic Church operates in China.

"The only way to make an organization completely secure is to cut it off from the internet," Miceli tells the Observer. "Our mission is to bring the Vatican Library into the 21st century, so we won't be doing that any time soon."


More:
Vatican Library Enlists Artificial Intelligence to Protect Its Digitized Treasures - Smithsonian Magazine

Getting the nitty-gritty of artificial intelligence right – BL on Campus

If you are a non-technical person learning a technical subject or trying to understand a technical field such as Artificial Intelligence (AI), let me share a simple tip that will help you a lot. Don't let the complex-sounding technical terms confuse you. In due course, you will get comfortable with the terminology if you focus on first principles and try to get an intuitive understanding of the concepts.

So, let's start with the basics and some working definitions. I've noticed the specific phrase "artificial intelligence and machine learning" and its shorthand "AI/ML" being used quite often. They even have more than 15 million and 5 million Google search results respectively. Clearly, their usage is quite common.

But let's examine this phrase "AI and ML." Using both AI and ML in a single phrase is actually a marketing practice rather than a technical distinction. Artificial Intelligence is a broad field, and Machine Learning is one of its branches; you can think of AI as the superset and ML as its subset. You won't say "chai and masala chai" when serving a single cup of tea, or sell "cars and red cars." But hey, using "AI and ML" makes one sound like an expert and is also good for search engine optimisation.

Next, let me provide working definitions of AI and ML and draw a distinction between them.

What's intelligence, anyway?

Intelligence is the ability to understand, reason, and generalise; Artificial Intelligence is machines or software having this capability. Because intelligence involves the capacity for abstraction or generalisation (in layman's terms, common sense), this kind of AI is also known as Artificial General Intelligence (AGI). In 2020, it may come as a surprise to you, but AGI is not on the table at all. We are nowhere close to AGI, nor is it clear whether we will ever achieve it. Machines with malice, emotions, or consciousness presuppose AGI and are limited to science fiction and movies.

What we have instead is artificial narrow intelligence. Narrow intelligence is a machine's ability to perform a single task very well. Examples of such tasks include deciphering handwriting, identifying images, and recognising spoken text. Early approaches to mastering such tasks, dating from the 1950s, involved codifying human expertise as rules for computers to follow. It wasn't possible to codify all the rules, and such rules-based expert systems worked well only in limited scenarios.
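To see why codified rules hit a wall, consider this toy Python "expert system" for reading a handwritten digit from hand-coded stroke features. The features and rules are made up for illustration, but they show how every unanticipated shape demands yet another hand-written rule.

```python
# A toy rules-based classifier built from hand-coded human expertise.
# The stroke features and rules are invented for illustration.
def classify_digit(has_loop: bool, has_vertical_stroke: bool) -> str:
    if has_loop and has_vertical_stroke:
        return "9"
    if has_loop:
        return "0"
    if has_vertical_stroke:
        return "1"
    return "unknown"  # every new shape needs another rule added by hand

print(classify_digit(has_loop=True, has_vertical_stroke=False))  # -> 0
```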

Machine learning, a pattern recognition tool

A different approach is machine learning, where the rules are not explicitly programmed by humans; instead, the software is fed large amounts of data, identifies patterns, and arrives at decision rules. Machine learning is where the software learns from the examples it has been given, and "learning" refers to the software becoming better with experience (that is, with more data). In other words, machine learning is a great pattern recognition tool.
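By contrast with the hand-coded rules above, here is a minimal Python sketch of the learning approach using scikit-learn's bundled handwritten-digits dataset. No human writes the recognition rules; the model derives its own decision rules from labelled examples.

```python
# A minimal sketch of learning decision rules from data instead of
# hand-coding them. Uses scikit-learn's built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The tree derives its decision rules from the labelled examples.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# "Better with experience": more labelled data generally raises this score.
print("Accuracy on unseen digits:", round(model.score(X_test, y_test), 2))
```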

There are different types of machine learning methods, which draw upon mathematics, probability, statistics, and computer science to detect these patterns. One particular set of machine learning techniques, popularly called deep learning, has made rapid strides in recent years (we will discuss deep learning in later columns) and is behind several modern machine learning applications.

These days, when you see headlines such as "AI solves X," "AI-powered software," "AI-enabled solution," or my favourite, "AI/ML," they almost always refer to machine learning. Let me make two things clear. First, we have made spectacular advances in machine learning in the last ten years. Second, it may not be AGI, but machine learning has a wide variety of uses for consumers, businesses, and governments.

When to use AI/ML

So, what are the takeaways from our discussion of AGI vs ML as you try to utilise ML in your organisation or business?

Machine learning is simply a pattern recognition powerhouse. It seems intelligent but does not have what we consider common sense. Consider the AI-powered TV camera that mistook a football referee's bald head for the ball and kept focusing on it instead of the actual match play: an amusing illustration of a mismatched pattern. No serious damage was done, and everyone got a good chuckle out of it.

But in some situations, the mistakes are costly, even fatal. Take the case of a self-driving car being tested in Arizona in 2018. The algorithm had been trained to identify pedestrians and cyclists, but the data it was fed did not include a person pushing a bicycle while walking alongside it. Arguably, a human driver would not have had difficulty recognising the pedestrian. The algorithm's failure to recognise the scenario contributed to an accident resulting in the pedestrian's death.

As a manager looking to leverage AI, you should have a good grasp of the nature and scope of machine intelligence and its narrow range of application, and be able to draw the boundaries beyond which AI will break down. Based on these, you can decide when to rely on AI and when not to. Good managers are expected to make decisions with imperfect or limited data. AI can't do that!

See the original post here:
Getting the nitty-gritty of artificial intelligence right - BL on Campus