Archive for November, 2020

PathAI and Gilead Report Data from Machine Learning Model Predictions of Liver Disease Progression and Treatment Response at AASLD’s The Liver Meeting…

BOSTON (PRWEB) November 06, 2020

PathAI, a global provider of AI-powered technology applied to pathology research, today announced the results of a research collaboration with Gilead that retrospectively analyzed liver biopsies from participants in clinical trials evaluating treatments for NASH or CHB (1). Using digitized hematoxylin and eosin (H&E)-, picrosirius red-, and trichrome-stained biopsy slides, PathAI's machine learning (ML) models were able to accurately predict changes in features traditionally used as markers for liver disease progression in clinical practice and clinical trials, including fibrosis, steatosis, hepatocellular ballooning, and inflammation. The new results will be presented in an oral presentation and four poster sessions at The Liver Meeting Digital Experience (TLMdX) that will be held from November 13-16, 2020.

These data build upon PathAI's previous success in retrospectively staging liver biopsies from clinical trials by showing that ML models may uncover patterns of histological features that correlate with disease progression or treatment response. Furthermore, ML models were able to estimate the hepatic venous pressure gradient (HVPG) in study subjects with NASH-related cirrhosis and quantify fibrosis heterogeneity from digitized slides, measures that are not reliably captured by traditional pathology methods. After appropriate clinical validation, these new tools could be useful in staging disease more accurately than is possible with current approaches.

"We continue to use machine learning to advance our understanding of liver diseases, including NASH and hepatitis B, as a foundation for developing new methods to track disease progression and assess response to therapeutics, said PathAI co-founder and Chief Executive Officer Andy Beck MD, PhD. Our long-standing partnership with Gilead continues to demonstrate the power of AI-based pathology to support development efforts to bring new therapies to patients."

Highlights include:

"Data presented at AASLD demonstrate the potential of machine learning approaches to improve our assessment of liver disease severity, reduce the variability of human interpretation of liver biopsies, and identify novel features associated with disease progression," said Rob Myers, MD, Vice President, Liver Inflammation/Fibrosis, Gilead Sciences. "We are proud of our ongoing partnership with PathAI and look forward to continued collaboration toward our shared goals of enhancing research efforts and improving outcomes of patients with liver disease."

The antiviral drug TDF effectively suppresses hepatitis B virus in patients with CHB, but a small subset of patients have persistently elevated serum ALT despite virologic suppression. ML models were applied to biopsy data from registrational studies of TDF to examine this small subgroup of non-responders. Analyses of the ML-model-predicted histologic features showed that persistently elevated ALT after five years of TDF treatment is associated with a higher steatosis score at baseline (BL) and increases in steatosis during follow-up. These data suggest that subjects with elevated ALT despite TDF treatment may have underlying fatty liver disease that impacts biochemical response.
Machine Learning Enables Quantitative Assessment of Histopathologic Signatures Associated with ALT Normalization in Chronic Hepatitis B Patients Treated with Tenofovir Disoproxil Fumarate (TDF) (Oral Abstract #18)
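The association described in this abstract amounts to a group comparison between subjects whose ALT normalized and those with persistently elevated ALT. As a rough, hypothetical illustration of that kind of analysis (not PathAI's or Gilead's actual pipeline; every column name, group size, and value below is invented), a non-parametric test could compare ML-predicted baseline steatosis between the two groups:

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder data standing in for per-subject ML-predicted steatosis scores;
# the group sizes, means, and column names are all invented.
df = pd.DataFrame({
    "baseline_steatosis": np.concatenate([
        rng.normal(0.8, 0.3, 120),   # subjects whose ALT normalized
        rng.normal(1.1, 0.3, 40),    # subjects with persistently elevated ALT
    ]),
    "alt_elevated": [0] * 120 + [1] * 40,
})

elevated = df.loc[df["alt_elevated"] == 1, "baseline_steatosis"]
normalized = df.loc[df["alt_elevated"] == 0, "baseline_steatosis"]

# One-sided non-parametric test: is baseline steatosis higher in the elevated-ALT group?
stat, p = mannwhitneyu(elevated, normalized, alternative="greater")
print(f"Median baseline steatosis (elevated ALT): {elevated.median():.2f}")
print(f"Median baseline steatosis (normalized ALT): {normalized.median():.2f}")
print(f"Mann-Whitney U p-value: {p:.4f}")
```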

ML models were deployed on biopsies from registrational trials of TDF in CHB to identify cellular and tissue-based phenotypes associated with HBV DNA and hepatitis B e-antigen (HBeAg). The study demonstrated that proportionate areas of ML-model-predicted hepatocellular ballooning at BL and Yr 5, and lobular inflammation at Yr 5, were higher in subjects who did not achieve virologic suppression. In addition, lymphocyte density across the tissue and within regions of lobular inflammation correlated with HBeAg loss, supporting the importance of an early immune response for viral clearance.
Machine Learning-Based Quantification of Histology Features from Patients Treated for Chronic Hepatitis B Identifies Features Associated with Viral DNA Suppression and HBeAg Loss (Poster #0848)

Standard manual methods for staging liver fibrosis have limited sensitivity and reproducibility. Application of an ML model to evaluate changes in fibrosis in response to treatment in the STELLAR and ATLAS trials enabled development of the DELTA (Deep Learning Treatment Assessment) Liver Fibrosis Score. This scoring method accounts for the heterogeneity in fibrosis severity that can be detected by ML models and reflects changes in fibrotic patterns that occur in response to treatment. Application of the DELTA Liver Fibrosis Score to biopsies from the Phase 2b ATLAS trial demonstrated a reduction in fibrosis in response to treatment with the investigational combination of cilofexor and firsocostat that was not detected by standard staging methods.
Validation of a Machine Learning-Based Approach (DELTA Liver Fibrosis Score) for the Assessment of Histologic Response in Patients with Advanced Fibrosis Due to NASH (Poster #1562)
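The release does not describe how the DELTA Liver Fibrosis Score is computed. Purely as a hedged sketch of how a continuous, heterogeneity-aware score could be built from patch-level ML predictions (the function, stage encoding, and data below are invented for illustration and are not PathAI's method):

```python
import numpy as np

def slide_level_fibrosis_score(patch_probs: np.ndarray) -> dict:
    """patch_probs: (n_patches, 5) probabilities over fibrosis stages F0-F4 for one slide."""
    stages = np.arange(5)                      # numeric encoding of F0..F4
    patch_scores = patch_probs @ stages        # expected fibrosis stage per patch
    return {
        "score": float(patch_scores.mean()),          # continuous slide-level fibrosis score
        "heterogeneity": float(patch_scores.var()),   # spread of fibrosis severity across the slide
    }

# Example with random patch-level predictions standing in for real model output.
rng = np.random.default_rng(1)
probs = rng.dirichlet(alpha=[1, 1, 2, 3, 2], size=500)
print(slide_level_fibrosis_score(probs))
```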

Integration of tissue transcriptomic data with histologic information is likely to reveal new insights into disease. Using liver tissue obtained during the STELLAR trials, which evaluated NASH subjects with advanced fibrosis, RNA-seq-generated, tissue-level gene expression profiles were integrated with ML-predicted histology. This analysis revealed five key genes strongly correlated with proportionate areas of portal inflammation and bile ducts, features that are themselves predictive of disease progression in NASH. High expression of these genes was associated with an increased risk of progression to cirrhosis in subjects with bridging (F3) fibrosis (hazard ratio [HR] 2.1; 95% CI 1.25, 3.49) and with liver-related clinical events among those with cirrhosis (HR 4.05; 95% CI 1.4, 14.36).
Integration of Machine Learning-Based Histopathology and Hepatic Transcriptomic Data Identifies Genes Associated with Portal Inflammation and Ductular Proliferation as Predictors of Disease Progression in Advanced Fibrosis Due to NASH (Poster #595)
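Hazard ratios with 95% confidence intervals like those above are typically estimated with a Cox proportional hazards model. The sketch below, using the open-source lifelines package on synthetic placeholder data, illustrates that general approach only; it is not the study's actual analysis, and every column name and value is hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300

df = pd.DataFrame({
    # 1 = high expression of the portal-inflammation/bile-duct gene signature (hypothetical encoding)
    "high_expression": rng.integers(0, 2, n),
    "time_to_event_months": rng.exponential(36, n),    # follow-up time
    "progressed_to_cirrhosis": rng.integers(0, 2, n),  # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event_months", event_col="progressed_to_cirrhosis")
cph.print_summary()   # the exp(coef) column is the hazard ratio, reported with its 95% CI
```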

The severity of portal hypertension, as assessed by HVPG, predicts the risk of hepatic complications in patients with liver disease but is not simple to measure. ML models were trained on images of 320 trichrome-stained liver biopsies from a Phase 2b trial of investigational simtuzumab in subjects with compensated cirrhosis due to NASH to recognize patterns of fibrosis that correlate with centrally read HVPG measurements. Deployed on a test set of slides, ML-calculated HVPG scores strongly correlated with measured HVPG and could discriminate subjects with clinically significant portal hypertension (HVPG ≥ 10 mm Hg).
A Machine Learning Model Based on Liver Histology Predicts the Hepatic Venous Pressure Gradient (HVPG) in Patients with Compensated Cirrhosis Due to Nonalcoholic Steatohepatitis (NASH) (Poster #1471)
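The release does not specify the model architecture. One common way to frame this kind of task, predicting a continuous HVPG value from stained-slide images, is image regression with a standard CNN backbone; the PyTorch sketch below is an assumed, minimal illustration with random tensors in place of real slide data, not PathAI's model:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Standard CNN backbone with a single-output regression head for predicted HVPG (mm Hg).
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batch: random tensors standing in for biopsy image tiles and
# centrally read HVPG measurements (hypothetical shapes and values).
images = torch.randn(8, 3, 224, 224)
hvpg = torch.rand(8, 1) * 20.0

model.train()
optimizer.zero_grad()
pred = model(images)
loss = loss_fn(pred, hvpg)
loss.backward()
optimizer.step()

# Illustrative downstream use: flag clinically significant portal hypertension
# with a 10 mm Hg threshold on predictions (an assumption for illustration).
print((pred.detach() >= 10.0).squeeze())
```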

(1) Trials include STELLAR, ATLAS, and NCT01672879 for investigation of NASH therapies, and registrational studies GS-US-174-102/103 for tenofovir disoproxil fumarate [TDF] for CHB.

About PathAI
PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com.

See the original post here:
PathAI and Gilead Report Data from Machine Learning Model Predictions of Liver Disease Progression and Treatment Response at AASLD's The Liver Meeting...

AI Recognizes COVID-19 in the Sound of a Cough – Machine Learning Times – The Predictive Analytics Times

Originally published in IEEE Spectrum, Nov 4, 2020.

Based on a cellphone-recorded cough, machine learning models accurately detect coronavirus even in people with no symptoms.

Again and again, experts have pleaded that we need more and faster testing to control the coronavirus pandemic, and many have suggested that artificial intelligence (AI) can help. Numerous COVID-19 diagnostics in development use AI to quickly analyze X-ray or CT scans, but these techniques require a chest scan at a medical facility.

Since the spring, research teams have been working toward anytime, anywhere apps that could detect coronavirus in the bark of a cough. In June, a team at the University of Oklahoma showed it was possible to distinguish a COVID-19 cough from coughs due to other infections, and now a paper out of MIT, using the largest cough dataset yet, identifies asymptomatic people with a remarkable 100 percent detection rate.

If approved by the FDA and other regulators, COVID-19 cough apps, in which a person records themselves coughing on command, could eventually be used for free, large-scale screening of the population.

With potential like that, the field is rapidly growing: Teams pursuing similar projects include a Bill and Melinda Gates Foundation-funded initiative, Cough Against Covid, at the Wadhwani Institute for Artificial Intelligence in Mumbai; the Coughvid project out of the Embedded Systems Laboratory of the École Polytechnique Fédérale de Lausanne in Switzerland; and the University of Cambridge's COVID-19 Sounds project.

The fact that multiple models can detect COVID in a cough suggests that there is no such thing as a truly asymptomatic coronavirus infection: physical changes always occur that change the way a person produces sound. "There aren't many conditions that don't give you any symptoms," says Brian Subirana, director of the MIT Auto-ID Lab and co-author on the recent study, published in the IEEE Open Journal of Engineering in Medicine and Biology.
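The published MIT system is considerably more elaborate than this, but the general pipeline behind cough-screening models can be sketched as: extract audio features (here MFCCs) from a recorded cough, then train a binary classifier on labeled examples. Everything below, including the feature choice and the synthetic "recordings", is an assumption for illustration rather than the published model:

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Summarize a cough clip as its mean MFCC vector (a common, simple audio feature)."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(3)

# Placeholder dataset: 40 one-second "recordings" of random noise with made-up labels,
# standing in for real cough audio and COVID-19 test results.
X = np.stack([cough_features(rng.standard_normal(16000).astype(np.float32)) for _ in range(40)])
y = rng.integers(0, 2, 40)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted probability of COVID-19 for the first clip:", clf.predict_proba(X[:1])[0, 1])
```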

To continue reading this article, click here.

Read the rest here:
AI Recognizes COVID-19 in the Sound of a Cough - Machine Learning Times - The Predictive Analytics Times

Professor in Computer Vision and Machine Learning job with CITY, UNIVERSITY OF LONDON | 232985 – Times Higher Education (THE)

CITY, UNIVERSITY OF LONDON

School of Mathematics, Computer Science & Engineering
Computer Science

Professor in Computer Vision and Machine Learning

SALARY: Competitive

Founded in 1894, City, University of London is a global university committed to academic excellence with a focus on business and the professions and an enviable central London location.

City attracts around 20,000 students (over 40% at postgraduate level) from more than 150 countries and staff from over 75 countries.

In the last decade City has almost tripled the proportion of its total academic staff producing world-leading or internationally excellent research.

Led by its President, Professor Sir Paul Curran, City has made significant investments in its academic staff, its estate and its infrastructure and continues to work towards realising its vision of being a leading global university.

The School of Mathematics, Computer Science & Engineering is a multi-disciplinary centre of research and education located in the heart of London's vibrant design community. It is proud of its research advances and of educating thousands of undergraduates and postgraduates in STEM subjects.

The Department of Computer Science has been at the leading edge of computer science in the UK for six decades. It awarded some of the country's first Computer Science degrees and laid the groundwork for the foundation of the British Computer Society. Today, it is a vibrant, modern department comprising approximately 50 academic staff and 60 research staff and PhD students.

The School is seeking to appoint a Professor in Computer Vision and Machine Learning who will join the Research Centre for Adaptive Computing Systems and Machine Learning (ACS-ML) and collaborate closely with Tesco plc on research for the retail sector. The appointed candidate will lead and foster excellent research; contribute to the delivery of high quality undergraduate and postgraduate education in core Computer Science; and play a lead role in developing the partnership with Tesco and strengthening expertise in Computer Vision for the retail sector.

The successful candidate will have a PhD in Computer Science or an area related to machine learning, artificial intelligence or computer vision; an internationally recognised reputation in such an area; a track record of world-leading or internationally excellent research; and experience of delivering high quality education in core Computer Science. A track record of generating research income and of delivering consultancy or specialist services to external clients will also be required.

City offers a sector-leading salary, pension scheme and benefits including a comprehensive package of staff training and development.

The role is available immediately.

Closing date: Friday 11th December 2020

Interviews are scheduled for January 2021

For a confidential discussion, please contact Imogen Wilde on +44 (0)7864 652 633 or Elliott Rae on +44 (0)7584 078 534.

For further information, please visit http://www.andersonquigley.com/city-prof

Actively working to promote equal opportunity and diversity
Academic excellence for business and the professions

Read more from the original source:
Professor in Computer Vision and Machine Learning job with CITY, UNIVERSITY OF LONDON | 232985 - Times Higher Education (THE)

Post Covid-19 Impact on Machine Learning in Communication Sales, Price, Revenue, Gross Margin and Market Share 2020 to 2026 Amazon, IBM, Microsoft,…

The global Machine Learning in Communication Market has been studied by a team of researchers for a defined forecast period of 2020 to 2026, providing insights to stakeholders in the market landscape. The report includes an in-depth analysis of various aspects of the market, including an overview, market segmentation, regional analysis, and a competitive outlook for the forecast period. All of these sections have been analyzed in detail to arrive at accurate and credible conclusions about the market's future trajectory. The overview section also covers the definition, classification, and primary applications of the product/service, giving the report's audience broader context.

Key Players

The global Machine Learning in Communication Market report profiles significant players that are shaping the trajectory of the market through their strategies for expanding and retaining market share. The major vendors covered include Amazon, IBM, Microsoft, Google, Nextiva, Nexmo, Twilio, Dialpad, Cisco, RingCentral, and more.

Get a free sample copy @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-in-communication-market-2019-by-company-regions-type-and-application-forecast-to-2024?utm_source=icotodaymagazine&utm_medium=39

Market Dynamics

The report on the global Machine Learning in Communication Market includes a section on market dynamics that offers deeper insight into how these dynamics interact and how changes in them affect the functioning of the market. It covers the factors expected to give the market impetus for growth and expansion over the forthcoming years, as well as factors poised to challenge market growth over the forecast period. These factors are expected to reveal hidden trends that aid in a better understanding of the market over the forecast period.

Market Segmentation

The global Machine Learning in Communication Market has been segmented in detail along several dimensions to provide insight into the functioning of each segment. This segmentation has enabled the researchers to study the relationship between the growth of these individual segments and the overall market growth rate, and it helps stakeholders in the global Machine Learning in Communication Market gain insights and make relevant, accurate decisions. A regional analysis of the market covers North America, Asia Pacific, Europe, Latin America, and the Middle East & Africa.

Research Methodology

The global Machine Learning in Communication Market has been analyzed using Porter's Five Forces model to gain precise insight into the true potential of the market's growth. Further, a SWOT analysis of the market has helped reveal the opportunities for expansion present in the market environment.

If you have any special requirements for this Machine Learning in Communication Market report, please let us know and we can provide a custom report.

Inquire more about this report @ https://www.reportsandmarkets.com/enquiry/global-machine-learning-in-communication-market-2019-by-company-regions-type-and-application-forecast-to-2024?utm_source=icotodaymagazine&utm_medium=39

About Us

ReportsAndMarkets.com distributes globally available market research and company reports from reputed market research companies that are pioneers in their respective domains. We are a completely autonomous group and serve our clients by offering trustworthy research material, as we know this is an essential aspect of market research.

Contact Us

Sanjay Jain

Manager, Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

Go here to read the rest:
Post Covid-19 Impact on Machine Learning in Communication Sales, Price, Revenue, Gross Margin and Market Share 2020 to 2026 Amazon, IBM, Microsoft,...

Jon Koncak Once Made More Money Than Michael Jordan and Larry Bird, Earning the Nickname Jon Contract – Sportscasting

Jon Koncak wasn't a household name as a 7-foot center in the NBA. Prior to his professional career with the Atlanta Hawks, Koncak was a high school standout and a college star at SMU. He was also a teammate of Michael Jordan on the 1984 U.S. Olympic basketball squad. Although he never outplayed Jordan, he outearned him for a bit in the NBA, earning the nickname "Jon Contract."

When Jon Koncak was in high school, he was a heavily recruited 7-footer. He grew about three inches per year and hit the seven-foot mark by the time he graduated. Koncak was a talented basketball player recruited by big-time schools but wound up signing with Southern Methodist University.

At the time, SMU was in rebuilding mode under head coach Dave Bliss. Koncak, showing his sense of humor, explained to the Atlanta Journal-Constitution in 2019 why he chose SMU: "Well, SMU coach Dave Bliss was the only one that could spell my name. All the other coaches spelled it John."

"No, really, the Missouri coach called and just expected me to sign with them because I was from Kansas City. SMU, meanwhile, sent an assistant coach to, I think, every game my senior year. It was neat being part of a program that we brought into the forefront."

It was a tough go early at SMU as the Mustangs went 6-21 and 1-15 in conference play. Jon Koncak averaged 10 points per game as a freshman, but by the time he was a junior, Koncak and the Mustangs were rolling. As a junior, Koncak averaged 15.5 points and 11.5 rebounds per game. The Mustangs went 25-8.

"I was recruited by 80 or 90 schools and I go to this place that had 4,000 kids and we won just six games my freshman year," Koncak told the AJC. "We beat all the powerhouses and my junior year (1984), we were one point away from beating Georgetown and Patrick Ewing and they would go on to win the national championship."

In his senior year, SMU went 23-10 and Koncak put up 17.2 points and 10.7 rebounds per game. Koncak went on to become a first-round pick in the 1985 NBA draft. The Atlanta Hawks selected him with the fifth overall pick.

Jon Koncak spent 11 years in the NBA primarily as a backup center. In his rookie season, he averaged a career-best 8.6 points per game. He never averaged better than 5.7 points in any of the 10 other seasons. In 1989, however, Koncak signed an unheard-of six-year, $13 million contract for a backup center. According to the Atlanta Journal-Constitution, Koncak at one point was making more money than superstars Michael Jordan, Larry Bird, and Magic Johnson.

Koncak remembers why he got the big deal. "During my fourth year, Kevin Willis had broken his foot and was out and (coach) Mike Fratello didn't want to play Moses (Malone) and I at the same time," he told the AJC. "So he started Cliff Levingston. But one day, Cliff was late to practice and it ticked off Fratello. He came up to me and asked if I could play the No. 4 spot. I did and in the first game, I scored only eight points but had 20 rebounds. He kept me in the lineup and we went 13-3 and in a couple of those games, I scored 20 points. I finally felt things were going my way and the Pistons offered me the offer sheet and then came the huge offer from the Hawks. The first game the next season when I took my first shot in Atlanta, the fans booed. I had been a fan favorite before the contract. I became the scapegoat for what was happening to the Hawks."

Koncak knew it was a big contract, but he wasn't about to turn it down. "Hey, I can't justify what they offered me," he told Sports Illustrated during his playing days. "But what was I supposed to do? Say no? The league is changing. I think maybe this is just the start."

Read the original post:
Jon Koncak Once Made More Money Than Michael Jordan and Larry Bird, Earning the Nickname Jon Contract - Sportscasting