Yan Cui and Team Are Innovating Artificial Intelligence Approach to Address Biomedical Data Inequality – UTHSC News

Yan Cui, PhD, associate professor in the UTHSC Department of Genetics, Genomics, and Informatics, recently received a $1.7 million grant from the National Cancer Institute for a study titled "Algorithm-based prevention and reduction of cancer health disparity arising from data inequality."

Dr. Cui's project aims to prevent and reduce health disparities caused by ethnically biased data in cancer-related genomic and clinical omics studies. His objective is to establish a new machine learning paradigm for use with multiethnic clinical omics data.

For nearly 20 years, scientists have been using genome-wide association studies, known as GWAS, and clinical omics studies to identify the molecular basis of diseases. But statistics show that over 80% of the data used in GWAS come from people of European descent.

As artificial intelligence (AI) is increasingly applied to biomedical research and clinical decisions, this European-centric skew is set to exacerbate long-standing disparities in health. With less than 20% of genomic samples coming from people of non-European descent, underrepresented populations are at a severe disadvantage in data-driven, algorithm-based biomedical research and health care.

"Biomedical data-disadvantage has become a significant health risk for the vast majority of the world's population," Dr. Cui said. "AI-powered precision medicine is set to be less precise for the data-disadvantaged populations, including all the ethnic minority groups in the U.S. We are committed to addressing the health disparities arising from data inequality."

The project is innovative in the type of machine learning technique it will use. Multiethnic machine learning normally relies on mixture learning and independent learning schemes; Dr. Cui's project will instead use a transfer learning process.

Transfer learning works much the same way as human learning. When faced with a new task, instead of starting the learning process from scratch, the algorithm leverages patterns learned from solving a related task. This approach greatly reduces the resources and amount of data required for developing new models.
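As an illustration of the general idea only, and not Dr. Cui's published method, the sketch below pre-trains a small classifier on a large, simulated source cohort and then fine-tunes just its final layer on a much smaller target cohort; the cohort sizes, feature counts, and model architecture are hypothetical.

```python
# Illustrative transfer-learning sketch with synthetic data; not the actual
# method used in Dr. Cui's study. Sizes and architecture are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A large "source" cohort and a small, data-disadvantaged "target" cohort.
n_features = 200
X_source, y_source = torch.randn(5000, n_features), torch.randint(0, 2, (5000,)).float()
X_target, y_target = torch.randn(150, n_features), torch.randint(0, 2, (150,)).float()

model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),  # shared feature extractor
    nn.Linear(64, 1),                      # task-specific prediction head
)

def train(model, X, y, epochs=50, lr=1e-3):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()

# 1) Pre-train on the large source cohort.
train(model, X_source, y_source)

# 2) Transfer: freeze the shared layers, then re-initialize and fine-tune only
#    the head on the small target cohort instead of training from scratch.
for p in model[0].parameters():
    p.requires_grad = False
model[2] = nn.Linear(64, 1)
train(model, X_target, y_target, epochs=30)
```

The pre-trained representation carries over whatever structure the source and target cohorts share, which is what lets the smaller cohort get by with far less data.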

Using large-scale cancer clinical omics data and genotype-phenotype data, Dr. Cui's lab will examine how and to what extent transfer learning improves machine learning on data-disadvantaged cohorts. In tandem with this, the team aims to create an open resource system for unbiased multiethnic machine learning to prevent or reduce new health disparities.

Neil Hayes, MD, MPH, assistant dean for Cancer Research in the UTHSC College of Medicine and director of the UTHSC Center for Cancer Research, and Athena Starlard-Davenport, PhD, associate professor in the Department of Genetics, Genomics, and Informatics, are co-investigators on the grant. Yan Gao, PhD, a postdoctoral scholar working with Dr. Cui, is a machine learning expert on the team. A pilot study for this project, funded by the UT Center for Integrative and Translational Genomics and the UTHSC Office of Research, has been published in Nature Communications.


Gradient AI and Merlinos Team to offer State-of-the-Art Artificial Intelligence and Expert Actuarial and Insurance Industry Consulting Services -…

BOSTON--(BUSINESS WIRE)--Gradient AI, the leading enterprise software provider of artificial intelligence solutions in the insurance industry, recently announced that it has partnered with Merlinos & Associates to combine state-of-the-art Artificial Intelligence (AI) and Machine Learning (ML) solutions with expert actuarial and insurance industry consulting services.

Layering Merlinos & Associates' expert consulting services on top of Gradient's high-precision models accelerates an insurance organization's path to implementation and, more importantly, its path to optimization and to achieving a measurable return on its investment.

Gradient AI and Merlinos & Associates are especially excited about this joint offering, which leverages both organizations' deep subject-matter expertise in the PEO industry to deliver the most comprehensive and holistic predictive risk management solution in the industry, built specifically for PEOs and other risk-sharing organizations.

"Speed and accuracy in both risk assessment and pricing have become paramount for insurance companies, MGUs, and Professional Employer Organizations (PEOs)," said Stan Smith, Founder & CEO, Gradient AI. "We're excited about the combination of our AI/ML predictive analytics solutions with Merlinos' industry-leading actuarial expertise and operationalization capabilities, as this will facilitate improved decision-making, faster responses, and measurable improvements for our mutual clients."

The consultants at Merlinos & Associates, through their actuarial, modeling, and industry experience, help risk takers in the insurance industry to maximize the value they can derive when deploying Gradient's AI predictions within their underwriting and claims operations. The combined expertise of Gradient and Merlinos delivers industry-leading planning, operations, deployment, and ongoing measurement of an organization's results based on the client's use of AI.

"We became aware of Gradient AI through our consulting work in the PEO industry, and we quickly realized that there was a great match between their skill set and ours," says Paul Merlino, President of Merlinos & Associates. "We are delighted to team with Gradient and help to expand the use of their tools to the insurance industry."

About Gradient AI

Gradient's artificial intelligence solutions help risk takers in the insurance industry automate and improve underwriting results, reduce claim costs, and improve operational efficiencies. The Gradient software-as-a-service (SaaS) platform boasts a proprietary dataset comprising tens of millions of claims, complemented with dozens of economic, health, geographic, and demographic datasets. This robust aggregation of data can provide demonstrable value for both underwriting and claims clients across all major lines of insurance, and is utilized by many of the most recognized insurance carriers, MGAs, TPAs, pools, PEOs, and more. Gradient focuses exclusively on delivering measurable results for its clients. To learn more about Gradient, please visit: https://www.gradientai.com.

About Merlinos & Associates

Merlinos & Associates delivers traditional actuarial services to a wide range of domestic and international clients, including primary insurers, reinsurers, municipalities, state insurance departments, law firms, examination firms, audit firms, MGAs, PEOs, self-insured entities and groups, captives, and risk retention groups. The firm handles virtually all lines of property, casualty, and health insurance. In addition, it offers a wide range of expanded services, including predictive analytics, monitoring and evaluation of the financial condition of insurers, actuarial feasibility studies, self-insurance and risk management strategies, and much more. To learn more about Merlinos & Associates, please visit: http://merlinosinc.com/.


Unlocking the power of data with artificial intelligence – TechRadar

Data is the lifeblood of business: it drives innovation and enhances competitiveness. However, its importance was brought to the fore by the pandemic, as lockdowns and social distancing drove digital transformation like never before.

About the author

Andrew Brown, General Manager, Technology Group, IBM United Kingdom & Ireland.

Forward-thinking businesses have started to grasp the importance of their data; they understand the consequences of not fully mobilizing it, but many are still at the start of their journey.

Even the best organizations are failing to extract the maximum benefit from their data while keeping it safe. This is where artificial intelligence (AI) comes into play: it can help enterprises with their data in three fundamental ways.

First, without the right tools it is impossible to unlock data's hidden value. For that to happen, businesses need to deploy AI because of its ability to analyze complex datasets and produce actionable insights. These can significantly enhance business agility and improve the foresight of enterprises of all sizes.

The success of any move to adopt AI will depend on a robust IT infrastructure being in place. Transforming data into useful information is only possible with this solid foundation, which in turn allows advanced AI applications to extract the real value locked inside the data.

During the first wave of the pandemic, IBM worked with The Royal Marsden, a world-leading cancer hospital, to launch an AI virtual assistant to alleviate some of the pressures and uncertainty for staff associated with COVID-19. The system depended on fast access to trusted information from diverse sources, such as the hospital's official policy handbook as well as data from NHS England. By tapping into these rich knowledge sources, staff were able to get quicker answers to workplace queries while the HR team had more time to handle complex requests.

Another issue is that far too many businesses simply don't know how much data they own. When data is split up into silos, it can be impossible to gain a clear view of not only what data is available but also where it resides. Removing this bottleneck can also be achieved through the implementation of AI. This is important because incomplete data will result in incomplete insights.

Businesses should prioritize making all data sources as simple and accessible as possible. Cloud computing technologies, such as hybrid data management, have a vital role to play here. Adoption makes it possible to manage all data types across multiple sources and locations, effectively breaking down these silos, which are a major barrier to AI adoption.

IBM has partnered with Wimbledon for more than 30 years, helping the world's leading tennis tournament get the most from its data. Tapping into a wealth of new and archived footage, player data, and historical records, fans can now benefit from personalized recommendations and highlights reels. Created through a rules-based recommendation engine integrated across Wimbledon's digital platforms, this personalized content allows fans to track their favorite players through the tournament as well as receive suggestions on emerging talent to follow.
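IBM has not published the engine's internals, so the following is only a hedged sketch of what a rules-based recommender of this general shape might look like; the player data, thresholds, and rules are hypothetical.

```python
# Hypothetical rules-based recommender; the rules, thresholds, and data below
# are illustrative and not taken from Wimbledon's actual system.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    age: int
    wins: int
    matches: int

def recommend(followed: set, players: list) -> list:
    picks = []
    for p in players:
        if p.name in followed:
            picks.append(f"Highlights reel: {p.name}")  # rule 1: players the fan follows
        elif p.age < 23 and p.matches and p.wins / p.matches > 0.7:
            picks.append(f"Emerging talent to watch: {p.name}")  # rule 2: young, high win rate
    return picks

players = [Player("A. Example", 21, 8, 10), Player("B. Sample", 30, 5, 10)]
print(recommend({"B. Sample"}, players))
```

A production system would layer many more such rules and per-fan signals, but the basic shape of "if condition, then surface this content" is the same.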

This is all made possible by the hybrid cloud: the data spans a combination of on-premises systems, private clouds, and public cloud. Breaking down these silos has allowed Wimbledon to innovate at pace to attract new global audiences.

While extracting value from data is undoubtedly beneficial for organizations, it also creates risks. Criminals are increasingly aware of the potential to exploit vulnerabilities to disrupt operations or cause reputational issues through leaking sensitive data. The threat landscape is evolving and rising data breach costs are a growing problem for businesses in the wake of the rapid technology shifts triggered by the pandemic.

Over the last year, businesses were forced to adapt their technology approaches quickly, with many companies encouraging or requiring employees to work from home; 60% of organizations moved further into cloud-based activities during the pandemic.

According to the latest annual Cost of a Data Breach report, conducted by Ponemon Institute and analyzed by IBM Security, serious security incidents now cost UK-based organizations an average of $4.67 million (around £3.4 million) per incident, the highest cost in the 17-year history of the report. This is higher than the global average of $4.24 million per incident, highlighting the importance of protecting data for British businesses.

AI has a role to play here, and the study revealed encouraging signs about the impact of intelligent and automated security tools. While data breach costs reached a record high over the past year, the report also showed positive signs about the impact of modern cybersecurity tactics, such as AI and automation, which may pay off by reducing the cost of these incidents further down the line.

The adoption of AI and security analytics were in the top five mitigating factors shown to reduce the cost of a breach. On average, organizations with a fully deployed security automation strategy faced data breach costs of less than half of those with no automation technology in place.

The sector in which a business operates also has a direct impact on the overall cost of a security breach. The report identified that the average cost of each compromised record containing sensitive data was highest for UK organizations in Services (£191 per record), Financial (£188), and Pharmaceuticals (£147). This highlights how quickly the costs of a breach can escalate if a large number of records are compromised.

The Cost of a Data Breach report highlights a number of trends and best practices that were consistent with an effective response to security incidents. These can be adopted by organizations of all types and sizes and can form the basis of a data management and governance strategy:

1. Invest in security orchestration, automation and response (SOAR). Security AI and automation significantly reduce the time to identify and respond to a data breach. By deploying SOAR solutions alongside your existing security tools, it's possible to accelerate incident response and reduce overall costs associated with breaches.

2. Adopt a zero trust security model to help prevent unauthorized access to sensitive data. Organizations with mature zero trust deployments have far lower breach costs than those without. As businesses move to remote working and hybrid cloud environments, a zero trust strategy can help protect data by only making it accessible in the right context.

3. Stress test incident response plans to improve resilience. Forming an Incident Response team, developing a plan and putting it to the test are crucial steps to responding quickly and effectively to attacks.

4. Invest in governance, risk management and compliance. Evaluating risk and tracking compliance can help quantify the cost of a potential breach in real terms. In turn this can expedite the decision-making process and resource allocation.

5. Protect sensitive data in the cloud using policy and encryption. Data classification schema and retention policies should help minimize the volume of sensitive information that is vulnerable to a breach. Advanced data encryption techniques should be deployed for everything that remains, as in the sketch after this list.
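As a minimal sketch of that last point, assuming the open-source cryptography package and a hypothetical record layout, field-level encryption might look like this; in a real deployment the key would come from a managed key store rather than being generated inline.

```python
# Minimal field-level encryption sketch using the "cryptography" package.
# The record fields and classification below are hypothetical examples.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, fetch this from a managed key store
fernet = Fernet(key)

record = {"customer_id": "12345", "diagnosis_code": "C50.9"}

# Encrypt only the fields that the data-classification policy marks as sensitive.
record["diagnosis_code"] = fernet.encrypt(record["diagnosis_code"].encode()).decode()
print(record)

# Decryption is limited to services that hold the key.
print(fernet.decrypt(record["diagnosis_code"].encode()).decode())
```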

So how should a business bring its AI strategy to life? First, organizations must ensure their infrastructure is equipped to handle all the data, processing and performance requirements needed to effectively run AI. If you use your existing storage arrangement without modernizing it, you greatly increase your risk of failure. A hybrid cloud implementation is likely to be the best solution in most instances as it offers the optimum flexibility.

Enterprises should also directly embed AI into their data management and security systems, which should have clearly defined data policies to ensure appropriate levels of access and resilience. The data management system and the data architecture should be optimized for added agility and ease of operation.

A fully featured AI implementation doesn't just aggregate data and perform large-scale analytics; it also enhances security and governance. Together they enable companies to create valuable business insights that fuel innovation. AI will also help ensure that data is used more efficiently and minimize data duplication. But above all, properly managed data is the lifeblood of the enterprise: a resource that needs to be identified and protected. Only then can companies start to climb the AI ladder.


Museum Of Wild And Newfangled Art’s Opening Exhibition Curated By Artificial Intelligence – Broadway World

The Museum of Wild and Newfangled Art (mowna) will open its final show of the year, "This Show is Curated by a Machine," on September 23, 2021.

The AI-curated exhibition opens with a talk on the development of the AI model, followed by a Q&A with the AI team: IV (Ivan Pravdin) and museum co-founders cari ann shim sham* and Joey Zaza. "This Show is Curated by a Machine" runs September 23, 2021, through January 31, 2022. Tickets bought prior to opening day, September 23, include entrance to the AI talk and are available at: https://www.mowna.org/museum/this-show-is-curated-by-a-machine

Earlier this year, The Whitney Museum of American Art commissioned and exhibited the work "The Next Biennial Should Be Curated by a Machine" for its online artport. In response, the Museum of Wild and Newfangled Art has designed an artificial intelligence curator that will not only redefine how we look at curation and AI but also underscore the need to move forward with AI curation in an ethical way.

The artificial intelligence model was trained on image sets from various sources, including the Museum of Modern Art, the Art Institute of Chicago, and the mowna Biennial submissions, an exhibit of around 88 international artists from 44 countries.

"Curation is very subjective. It's my hope through the development of an AI curator that we can allow for equity and diversity, and eliminate some biases," says cari ann shim sham*.

Artists in the show include Alice Prum, a London-based artist whose work explores the invisible relationships between space, the body, and technology; Bridget DeFranco, an East Coast media artist working against the high-stimulation nature of the screen; and Avideh Salmanpour, an Iranian artist whose paintings explore the bewilderment of contemporary man and the attempt to find a new way.

The artificial intelligence curator was created by multiple artists. IV is a post-contemporary artist working with various artificial intelligence and neural networking techniques. cari ann shim sham* is the co-founder and curator of mowna, a wild artist working at the intersection of dance and technology, and an associate arts professor of dance and technology at NYU Tisch School of the Arts. Joey Zaza is the co-founder and curator of mowna, and works in photography, software, video, sound, and installation. They combined forces to explore the potential of using artificial intelligence in art curation. The team's initial thoughts, strategies and questions in the development of the AI model can be found on mowna's blog.

Human curation is also included alongside the AI curation for "This Show is Curated by a Machine" to offer a comparison. Text written by the team will explain why they think the AI did or did not choose each work. This show marks the successful completion of phase one of mowna's AI model, which ranks and curates a show using image-based files. mowna will release a paper with its phase-one research and findings to the public. With this data, the team will enter phase two of development: extending the AI's ability to curate sound and video files.
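mowna has not published the model's internals, so the following is only a hypothetical sketch of one common way to rank image submissions: embed each image with a pretrained encoder and score it against a reference set of previously curated works. The folder names and the choice of ResNet-18 are assumptions for illustration.

```python
# Hypothetical image-ranking sketch; not mowna's actual curator. Folder paths
# and the pretrained encoder (ResNet-18) are illustrative assumptions.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
encoder = resnet18(weights=weights)
encoder.fc = torch.nn.Identity()  # keep the 512-d feature vector, drop the classifier
encoder.eval()
preprocess = weights.transforms()

def embed(path: Path) -> torch.Tensor:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(encoder(x), dim=-1).squeeze(0)

# Reference works stand in for prior curatorial choices; submissions are the open call.
reference = torch.stack([embed(p) for p in Path("reference_works").glob("*.jpg")])
centroid = torch.nn.functional.normalize(reference.mean(dim=0), dim=-1)

scores = [(p.name, float(embed(p) @ centroid)) for p in sorted(Path("submissions").glob("*.jpg"))]

# The highest-scoring submissions form the machine-curated selection.
for name, score in sorted(scores, key=lambda t: t[1], reverse=True)[:10]:
    print(f"{score:.3f}  {name}")
```

A score defined by similarity to a reference set is also one obvious place where bias could creep in, which underscores the ethical questions the team raises about AI curation.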

"This Show is Curated by a Machine" will be installed and available for viewing on September 23, 2021 and marks the third online art exhibition by mowna. The second, the 2021 mowna Biennial, showcases art of all mediums and focuses on exhibiting art that might have otherwise gone unseen due to gaps in the post-pandemic art world. It is currently still on exhibit until September 22, 2021 and can be viewed on the mowna website. Tickets are a sliding scale of pay-what-you-wish.

mowna makes it their priority to showcase a broad range of art and is committed to diversity in every way. It provides an international online platform for the most timely, diverse, and preeminent artists. At the center of the constantly changing and expanding art world, mowna showcases a mixture of the familiar and unfamiliar. Members will have the opportunity to see artists who have been curated by the MoMA or the Whitney alongside artists available only on mowna.

As the global landscape shifts towards a more technological way of being, mowna is there to meet the needs of an ever-changing art world. The Museum of Wild and Newfangled Art was formed to feature the newest art developments and make art of all mediums accessible to everyone. And it unmistakably builds on that foundation with the upcoming exhibition "This Show is Curated by a Machine".

For more information on current and upcoming exhibitions and events, please visit mowna's new pages on Instagram and Facebook as well as the museum's official website.


Federal Court Rules That Artificial Intelligence Cannot Be An Inventor Under The Patent Act – JD Supra

Although this blog typically focuses on decisions in intellectual property and/or antitrust cases that are pending in, or originated in, the United States District Court for the District of Delaware, or that are on appeal at the Federal Circuit, every now and then a decision from another federal trial or appellate court is significant enough to warrant going beyond those boundaries. The recent decision by The Honorable Leonie M. Brinkema, of the United States District Court for the Eastern District of Virginia, in Thaler v. Hirshfeld et al., Civil Action No. 1:20-cv-903-LMB (E.D. Va. September 2, 2021), is such a decision.

In Thaler, the Court confronted, analyzed, and answered the question of "can an artificial intelligence machine be an inventor under the Patent Act?" Id. at *1. After analyzing the plain statutory language of the Patent Act and Federal Circuit authority, the Court held that "the clear answer is no." Id. In reaching its holding, the Court found that Congress intended to limit the definition of inventor to natural persons, which means humans, not artificial intelligence. Id. at *17. The Court noted that "[a]s technology evolves, there may come a time when artificial intelligence reaches a level of sophistication such that it might satisfy accepted meanings of inventorship. But that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law." Id. at *17-18.

A copy of the Memorandum Opinion is attached.

