Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence requires trusted data, and a healthy DataOps ecosystem – ZDNet

Lately, we've seen many "x-Ops" management practices appear on the scene, all derived from DevOps, which seeks to coordinate the output of developers and operations teams into a smooth, consistent and rapid flow of software releases. Another emerging practice, DataOps, seeks to achieve a similarly smooth, consistent and rapid flow of data through enterprises. Like many things these days, DataOps is spilling over from the large Internet companies, which process petabytes and exabytes of information on a daily basis.

Such an uninhibited data flow is increasingly vital to enterprises seeking to become more data-driven and scale artificial intelligence and machine learning to the point where these technologies can have strategic impact.

Awareness of DataOps is high. A recent survey of 300 companies by 451 Research finds 72 percent have active DataOps efforts underway, and the remaining 28 percent plan to start them over the coming year. A large majority, 86 percent, are increasing their spend on DataOps projects over the next 12 months. Most of this spending will go to analytics, self-service data access, data virtualization, and data preparation efforts.

In the report, 451 Research analyst Matt Aslett defines DataOps as "The alignment of people, processes and technology to enable more agile and automated approaches to data management."

The catch is that "most enterprises are unprepared, often because of behavioral norms -- like territorial data hoarding -- and because they lag in their technical capabilities -- often stuck with cumbersome extract, transform, and load (ETL) and master data management (MDM) systems," according to Andy Palmer and a team of co-authors in their latest report, Getting DataOps Right, published by O'Reilly. Across most enterprises, data is siloed, disconnected, and generally inaccessible. There is also an abundance of data that is completely undiscovered, of which decision-makers are not even aware.

Here are some of Palmer's recommendations for building and shaping a well-functioning DataOps ecosystem:

Keep it open: "The ecosystem in DataOps should resemble DevOps ecosystems in which there are many best-of-breed free and open source software and proprietary tools that are expected to interoperate via APIs." This also includes carefully evaluating and selecting from the raft of tools that have been developed by the large internet companies.

Automate it all: The collection, ingestion, organizing, storage and surfacing of massive amounts of data at a near-real-time pace has become almost impossible for humans to manage. Let the machines do it, Palmer urges. Areas ripe for automation include "operations, repeatability, automated testing, and release of data." Look to the way DevOps facilitates the automation of the software build, test, and release process, he points out; a sketch of the idea follows.
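As a minimal sketch (not from Palmer's report; the record fields and validation rule are hypothetical), automated testing and release of data can mean that each incoming batch is validated and split by code rather than by hand, mirroring an automated software build/test/release cycle:

    from datetime import datetime, timezone

    def validate(record: dict) -> bool:
        """Reject records with missing keys or impossible values."""
        return record.get("id") is not None and record.get("amount", -1) >= 0

    def ingest(batch: list) -> dict:
        """Automatically split a batch into releasable and quarantined records."""
        released = [r for r in batch if validate(r)]
        quarantined = [r for r in batch if not validate(r)]
        return {
            "released": released,
            "quarantined": quarantined,
            "processed_at": datetime.now(timezone.utc).isoformat(),
        }

    print(ingest([{"id": 1, "amount": 9.5}, {"id": None, "amount": -2.0}]))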

Process data in both batch and streaming modes: While DataOps is about real-time delivery of data, there's still a place -- and reason -- for batch mode as well. "The success of Kafka and similar design patterns has validated that a healthy next-generation data ecosystem includes the ability to simultaneously process data from source to consumption in both batch and streaming modes," Palmer points out; see the sketch below.
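As a toy illustration (plain Python rather than Kafka itself, with an invented record shape), one source-to-consumption transformation can serve both modes: the same function processes a stored batch and, unchanged, a record-at-a-time stream:

    from typing import Iterable, Iterator

    def transform(record: dict) -> dict:
        """The single source-to-consumption transformation."""
        return {**record, "amount_cents": round(record["amount"] * 100)}

    def process_batch(records: list) -> list:
        """Batch mode: transform a stored collection all at once."""
        return [transform(r) for r in records]

    def process_stream(records: Iterable) -> Iterator:
        """Streaming mode: transform records one at a time as they arrive."""
        for r in records:
            yield transform(r)

    batch = [{"amount": 1.25}, {"amount": 2.50}]
    print(process_batch(batch))
    print(list(process_stream(iter(batch))))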

Track data lineage: Trust in the data is the single most important element in a data-driven enterprise, which may simply cease to function without it. That's why well-thought-out data governance and a metadata (data about data) layer are important. "A focus on data lineage and processing tracking across the data ecosystem results in reproducibility going up and confidence in data increasing," says Palmer. A sketch of a lineage record follows.
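One minimal way to picture such a metadata layer (the dataset names and fields here are made up, and no particular governance tool is assumed) is a lineage record attached to each derived dataset, recording where the data came from, what produced it, and a content hash so results are reproducible:

    import hashlib
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class LineageRecord:
        dataset: str        # name of the derived dataset
        sources: list       # upstream datasets it was built from
        transform: str      # name/version of the code that produced it
        content_hash: str   # fingerprint of the output, for reproducibility

    def fingerprint(rows: list) -> str:
        """Hash the rows so identical inputs provably yield identical outputs."""
        return hashlib.sha256(json.dumps(rows, sort_keys=True).encode()).hexdigest()

    rows = [{"id": 1, "amount_cents": 125}]
    record = LineageRecord("payments_clean", ["payments_raw"], "clean_v1", fingerprint(rows))
    print(asdict(record))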

Have layered interfaces: Everyone touches data in different ways. "Some power users need to access data in its raw form, whereas others just want to get responses to inquiries that are well formulated," Palmer says. That's why a layered set of services and design patterns is required for the different personas of users. Palmer says there are three approaches to meeting these multilayered requirements; a minimal sketch of the general idea follows.
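As a toy illustration of layering (the data and function names are invented, and this is not one of Palmer's specific approaches), a raw layer can serve power users while a curated layer answers one well-formulated question:

    RAW = [{"id": 1, "amount": 9.5, "debug": "trace-a"},
           {"id": 2, "amount": 3.0, "debug": "trace-b"}]

    def raw_access() -> list:
        """Power-user layer: the data exactly as stored."""
        return RAW

    def total_spend() -> float:
        """Curated layer: one well-formulated inquiry, one answer."""
        return sum(r["amount"] for r in raw_access())

    print(total_spend())  # 12.5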

Business leaders are increasingly leaning on their technology leaders and teams to transform their organizations into data-driven digital entities that can react to events and opportunities almost instantaneously. The best way to accomplish this -- especially given the meager budgets and limited support that often accompany this mandate -- is to align the way data flows from source to storage.

Continue reading here:
Artificial intelligence requires trusted data, and a healthy DataOps ecosystem - ZDNet

Adebayo Adeleke hosts Olusola Amusan, widely acclaimed as the Artificial Intelligence Evangelist on the third episode of Unfettered podcast. -…

In the third episode of the Unfettered Podcast, Adebayo Adeleke joined forces with the widely acclaimed Artificial Intelligence evangelist Olusola Amusan to deliver one of the most profound conversations on Artificial Intelligence.

Aptly themed "Artificial Intelligence in the 21st Century," this episode skilfully introduces listeners to the world of Artificial Intelligence. What is more compelling about this episode is how Olusola reveals what individuals and governments can do to prepare for the disruption that Artificial Intelligence will bring in the coming decades.

According to Adebayo, the host, this episode is crucial because it gives listeners in-depth knowledge of automation and the fourth industrial revolution, and how these forces will affect our everyday lives.

For more information about the podcast, visit www.unfetteredpodcast.com. The episode is available on Apple Music, Spotify, Google Podcasts, Castbox, and Podotron.

About Adebayo Adeleke

Adebayo Adeleke is an entrepreneur, retired U.S. Army Major, and global thought leader. He is the Managing Partner at Pantote Solutions LLC (Dallas, TX), a Principal Partner and Senior Supply Chain Consultant for Epot Consulting Limited, and a Lecturer in Supply Chain Management at Sam Houston State University.

His unwavering desire to professionally mentor and guide African immigrants led him to start the Rising Leadership Foundation, a 501(c)(3) non-profit organization that seeks to transform governance and leadership using technology and mentoring in the inner cities of Texas, African immigrant communities, and the continent of Africa.

For more information about Adebayo Adeleke and all his projects, kindly visit www.adebayoadeleke.com/

Excerpt from:
Adebayo Adeleke hosts Olusola Amusan, widely acclaimed as the Artificial Intelligence Evangelist on the third episode of Unfettered podcast. -...

Microsoft launches $40 million artificial intelligence initiative to advance global health research – seattlepi.com

Microsoft campus in Redmond. (Photo: Xinhua News Agency via Getty Images)

Microsoft announced Wednesday that its newest $40 million investment in artificial intelligence (AI) will help advance global health initiatives, with two cash grants going to medical research at Seattle-based organizations.

As part of the tech giant's $165 million AI for Good initiative, this new public health branch will focus on three main areas: accelerating medical research around prevention and diagnosis of diseases, generating new insights about mortality and global health crises, and improving health equity by increasing access to care for underserved populations.

"As a tech company, it is our responsibility to ensure that organizations working on the most pressing societal issues have access to our latest AI technology and the expertise of our technical talent," wrote John Kahan, Chief Data Analytics Officer at Microsoft in a company blog. "Through AI for Health, we will support specific nonprofits and academic collaboration with Microsofts leading data scientists, access to best-in-class AI tools and cloud computing, and select cash grants."

One of the grants will go to Seattle Children's Hospital to continue their research on the causes and diagnosis of Sudden Infant Death Syndrome (SIDS). The Centers for Disease Control and Prevention estimated that 3,600 infants died in 2017 alone from SIDS.

Microsoft data scientists have already been working with researchers at Seattle Children's Hospital and discovered a correlation between maternal smoking and the syndrome, estimating that 22 percent of SIDS deaths are attributable to smoking.

This research is personal for Kahan, who lost a son to SIDS.

"I saw firsthand, both personally and professionally, how you can marry artificial intelligence and medical research to advance this field, said Kahan in the program's launch event on Jan. 29. I saw because I lost my first son, and only son, to SIDS and I saw our head of data science partner with leading medical experts at Seattle Childrens and research institutes around the world."

Another grant will go toward Fred Hutchinson Cancer Research Center's Cascadia Data Discovery Initiative, which aims to accelerate cancer research by creating a system for institutions and researchers across the Pacific Northwest to share biomedical data.

Other grants will benefit the Novartis Foundation for efforts to eliminate leprosy and Intelligent Retinal Imaging Systems to distribute diabetic retinopathy diagnostic software to prevent blindness.

These grants come as AI's rapidly growing role across industries is being debated by professionals, especially in medicine. Microsoft stated that less than 5% of AI professionals are operating in the health and nonprofit sector, leaving medical researchers with a shortage of talent and knowledge in the field.

Technological innovations in AI are also moving faster than most doctors can prepare for. A recent study by Stanford Medicine found that only 7% of the 523 U.S. physicians surveyed thought they were "very prepared" to implement AI into their practice. The study called this a "transformation gap," citing that while most medical professionals can perceive the benefits of this technology for their patients, few feel prepared to adequately utilize it.

"Tomorrows clinicians not only need to be prepared to use AI, but they must also be ready to shape the technologys future development," the study states.

Other efforts in Microsoft's AI for Good initiative include AI for Earth, AI for Accessibility, AI for Cultural Heritage and AI for Humanitarian Action.

Read more from the original source:
Microsoft launches $40 million artificial intelligence initiative to advance global health research - seattlepi.com

NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring – Science Times

(Photo: Bigstock) AI Learning and Artificial Intelligence Concept - icon graphic interface showing computer, machine thinking, and AI of digital robotic devices

The nearshoring technology industry is seeing rapid growth in demand from North American companies for engineering and data science services related to the ongoing artificial intelligence (AI) revolution. Companies find high value in working on new and sophisticated applications with nearshoring firms that are close in proximity, time zones, language, and business culture.

In recent years, the costs of offshoring have risen relative to nearshoring costs. Additionally, tech education opportunities in the Western hemisphere have improved, and Western countries have far fewer holidays and lost workdays than offshore countries. In this article, NearShore Technology examines current AI trends impacting nearshoring.

AI has been an active field for computer scientists and logicians for decades, and in recent years hardware and software capabilities have advanced to the point where many AI processes can actually be implemented. In general, AI describes the ability of a program and associated hardware to simulate human intelligence, reasoning, and analysis of real-world data. Advances in algorithms are enabling greater learning, logic, and creativity in AI processes, and increased computing power allows AI to process information in quantities, and with perceptive abilities, beyond traditional human powers. Many industrial processes are finding great utility in machine learning, an AI-based approach that allows systems to evolve and improve based on experience; a minimal sketch of that idea follows.
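As a toy illustration of learning from experience (the data and the true relationship here are invented), a single model parameter improves as examples are processed:

    def train(examples, steps=100, lr=0.05):
        """Fit y = w * x by repeatedly nudging w to shrink the squared error."""
        w = 0.0                            # initial guess
        for _ in range(steps):
            for x, y in examples:          # "experience": observed pairs
                w -= lr * (w * x - y) * x  # gradient step on 0.5*(w*x - y)**2
        return w

    # The examples follow y = 2x, so the learned weight approaches 2.
    print(train([(1, 2), (2, 4), (3, 6)]))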

The huge tech companies that focus on how customers use software are leading the way in AI development. Companies like Google, Amazon, and Facebook are committing immense resources to advancing their AI systems' abilities to understand and predict customer behavior. In addition to tech and retail firms, healthcare, financial services, and auto manufacturers (aiming at a future of autonomous cars) are all committing to developing effective AI technology. From routine activities such as customer support and billing to more intuition-based activities like investing and making strategic decisions, AI is becoming a central part of competing in almost every industry.

AI development requires experienced and skillful software engineers and programmers. An AI application's effectiveness depends first on the quantity and quality of the data it is given: algorithms must be able to perceive relevant data and to learn and improve from the data they receive. Programmers and engineers must be able to understand and facilitate algorithm improvement over time, since AI applications are never really finished and are constantly in development. They must also rely on enough competent data scientists and analysts to sort and assess the nature and quality of the information an AI application processes, to provide a meaningful picture of how well it is functioning. The entire process is changing and progressing quickly, and the effectiveness of AI is determined by the abilities of the engineers and programmers involved. A sketch of such a monitoring check follows.
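One minimal way to picture the "how well is it functioning" check (the labels and the deliberately trivial model are hypothetical) is scoring predictions against a held-out sample:

    def accuracy(predict, holdout):
        """Fraction of held-out examples the model labels correctly."""
        hits = sum(1 for x, y in holdout if predict(x) == y)
        return hits / len(holdout)

    holdout = [("win cash now", "spam"), ("meeting at 3pm", "ham"),
               ("free prize inside", "spam")]

    def keyword_model(text):
        """A deliberately simple model to be measured and improved over time."""
        return "spam" if "free" in text or "cash" in text else "ham"

    print(f"holdout accuracy: {accuracy(keyword_model, holdout):.2f}")  # 1.00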

Historically, many traditional IT services have been suited to offshoring. Most traditional IT and call center support services were routine, and the cost-efficiency of offshoring these processes made economic sense in many situations. When skilled programming and data science are not a requirement, offshoring has had a place in the mix for many local companies. However, the worldwide shortage of skilled engineers and data scientists is most acute in the parts of the world typically used for offshore services.

Nearshoring AI technology development allows local companies to have meaningful and real-time relationships with programmers and data specialists who have the requisite skills needed. These nearshore relationships are vital to the ongoing nature of AI development.

Among the most important considerations in a successful nearshoring AI relationship is examining the actual skill and education of the nearshore firm's workers. A nearshore provider's team should be up to date with the latest technology developments and should have experience and a history of success in the relevant industry. Because AI development often depends on natural language use, it is important that AI developers are native or fluent speakers of the client company's language. Working with a nearshore firm that is close in time zone and geography also helps the firm properly understand the culture and needs of a company's market and customers. A nearshore firm working on AI processes should feel like a complete partner, not just another outsourced provider of routine tasks.

NearShore Technology is a US firm headquartered in Atlanta with offices throughout North America. The company focuses on meeting all the technology needs of its clients. NearShore partners with technology officers and leaders to provide effective and timely solutions that fit each customer's unique needs. NearShore uses a family-based approach to provide superior IT, Medtech, Fintech, and related services to its customers and partners throughout North America.

View original post here:
NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring - Science Times

Artificial Intelligence Suffers from The Biases of their Human Creators, that Causes Problems Searching for ET – Science Times

One structure on the dwarf planet Ceres made big news, but there is a hitch: a square-shaped form inside a larger triangle, located in a crater. Human observers saw it readily, yet artificial intelligence may be a square peg in a round hole here. A Spanish neuropsychologist is questioning the reliability of depending on AI, which might be unsound for SETI.

Ceres is a dwarf planet and the biggest object in the main asteroid belt. One of its craters, Occator, has bright spots that led to several ideas about what they were. NASA sent the Dawn probe close enough to capture visual evidence and solve the mystery: the lights came from volcanic ice and salt eruptions, nothing more.

It gets more interesting: researchers at the University of Cadiz (Spain) have examined images of these spots, known as the Vinalia Faculae, an area where geometric contours are very evident to observers. The region now serves as a template for comparing how machines and humans perceive images of planetary surfaces in general. Tests like these will show whether artificial intelligence can spot technosignatures of lifeforms beyond human civilization.

During the test, more than one individual saw the squarish shape in the Vinalia Faculae, offering a perfect chance to compare human perception against artificial intelligence: show a human subject what the AI sees and compare the results. In searching for ETs, radio signals are no longer the only consideration; captured images now matter as well.

To see where their hypothesis would lead, the neuropsychologists modified the previous experiments to dig deeper, adding another layer: a new batch of volunteers was recruited, this time amateurs in astronomy, to analyze what they saw in the Occator image.

Their responses were compared against an artificial vision system grounded in convolutional neural networks (CNNs), an AI taught to identify squares and triangles. From this point on, the experiment got interesting; a minimal sketch of such a shape classifier follows.
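As a rough illustration only (this is not the Cadiz team's actual model; it assumes PyTorch and a synthetically drawn 32x32 binary image), a CNN for telling squares from triangles might look like:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ShapeNet(nn.Module):
        """Tiny CNN with two classes: square (0) and triangle (1)."""
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 8, 3, padding=1)   # low-level edge filters
            self.conv2 = nn.Conv2d(8, 16, 3, padding=1)  # corner/angle features
            self.fc = nn.Linear(16 * 8 * 8, 2)           # class scores

        def forward(self, x):
            x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
            return self.fc(x.flatten(1))

    def draw_square(size=32):
        """Synthesize one hollow square as a 1-channel image."""
        img = torch.zeros(1, size, size)
        img[0, 8:24, 8] = 1.0; img[0, 8:24, 23] = 1.0   # left, right edges
        img[0, 8, 8:24] = 1.0; img[0, 23, 8:24] = 1.0   # top, bottom edges
        return img

    model = ShapeNet()                          # untrained, for illustration
    logits = model(draw_square().unsqueeze(0))  # batch containing one square
    print(logits.softmax(dim=1))                # roughly uniform before training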

The researchers ran the experiment, and some peculiarities were observed in how people perceived the images:

a. Both the AI and the people saw the square structure.

b. The AI also did not fail to notice the triangle.

c. Whenever the triangle was pointed out to people, more of them reported seeing it.

d. The square appeared inside the triangle, as it was visually represented.

The neuropsychologists drew the following conclusions from the results of the experiment on the amateurs, published in the journal Acta Astronautica:

a. Applying artificial intelligence to the search for ETs is not foolproof. Just like human intelligence, AI can be mistaken, confused, and prone to false perceptions.

b. AI can be applied to some tasks in the search for technosignatures and ETs, but overall it should be implemented with caution, especially in SETI.

c. The biases in an AI's programming are those of its creators, which is unavoidable. The best move is to keep studying artificial intelligence while it remains under human stewardship.

Originally posted here:
Artificial Intelligence Suffers from The Biases of their Human Creators, that Causes Problems Searching for ET - Science Times