Archive for the ‘Artificial Intelligence’ Category

ACS Receives BOA Award Providing Data Readiness for Artificial Intelligence Development (DRAID) for DoD Joint Artificial Intelligence Center (JAIC)…

RESTON, Va.--(BUSINESS WIRE)--Assured Consulting Solutions (ACS) is proud to announce that it has received an award under the basic ordering agreement (BOA) providing Data Readiness for Artificial Intelligence Development (DRAID) for the DoD Joint Artificial Intelligence Center (JAIC) and the Chief Digital and Artificial Intelligence Office. This BOA is a decentralized vehicle that streamlines rapid procurement and agile delivery of AI data readiness capabilities for Defense AI initiatives. The streamlined methodologies will benefit both industry and government partners by increasing competition and flexibility for each task order.

The successful use of AI depends critically on the availability of quality data that can be used to build reliable AI-enabled systems. The DRAID vehicle will address the entire data lifecycle, from collection through pre-processing, up to the point of AI system creation. It will also support AI-specific requirements, including the unique challenges of operationalizing data for AI. DRAID is also customizable; it enables the selection of a custom subset of AI data readiness services to meet individual needs. ACS will leverage our DeepGovernance practice and D2SAM approach to help DoD customers prepare for rapid and agile adoption of AI technologies.

ABOUT ASSURED CONSULTING SOLUTIONS

Founded in 2011 and headquartered in Reston, Va., Assured Consulting Solutions is a well-respected and trusted partner, domain expert, and provider of expert-level support. ACS is a certified Woman-Owned Small Business (WOSB) that delivers advanced technology solutions and strategic support services in support of critical national security missions for Intelligence, Defense, and Federal Civilian customers. Learn more at http://www.assured-consulting.com.

ABOUT D2SAM

ACS's Data-Driven Secure Agile Methodology (D2SAM) is a framework of engineering and non-technical tools, processes, and techniques supported by an underlying model-based infrastructure, data environment, and process library. The D2SAM Framework is organized into four cyclical quadrants that reflect the continuous delivery of services and systems to customers. ACS envisions our customers being on a continual journey through strategy, design, transition, and operations (SDTO) cycles leading towards their future goals and operational outcomes. Learn more at https://www.assured-consulting.com/blog/2021/12/17/acs-announces-trademark-registration-of-d2sam

Excerpt from:
ACS Receives BOA Award Providing Data Readiness for Artificial Intelligence Development (DRAID) for DoD Joint Artificial Intelligence Center (JAIC)...

How Can Artificial Intelligence Help With Suicidal Ideation? – Theravive

A new study published in the Journal of Psychiatric Research looked at the performance of machine learning models in predicting suicidal ideation, attempts, and deaths.

“My study sought to quantify the ability of existing machine learning models to predict future suicide-related events,” study author Karen Kusuma told us. “While there are other research studies examining a similar question, my study is the first to use clinically relevant and statistically appropriate performance measures for the machine learning studies.”

The utility of artificial intelligence has been a controversial topic in psychiatry, and medicine overall. Some studies have demonstrated better performance with machine learning methods, while others have not. Kusuma began the study expecting that machine learning models would perform well.

“Suicide is a leading cause of years of life lost across most of Europe, central Asia, southern Latin America, and Australia (Naghavi, 2019; Australian Bureau of Statistics, 2020),” Kusuma told us. “Standard clinical practice dictates that people seeking help for suicide-related issues need to be first administered with a suicide risk assessment. However, research has found that suicide risk predictions tend to be inaccurate.”

Only five per cent of people ordinarily classified as high risk died by suicide, while around half of those who died by suicide would normally be categorised as low risk (Large, Ryan, Carter, & Kapur, 2017). Unfortunately, there has been no improvement in suicide prediction research in the last fifty years (Franklin et al., 2017).

“Some researchers have claimed that machine learning will become an efficient and effective alternative to current suicide risk assessments (e.g. Fonseka et al., 2019),” Kusuma told us, “so I wanted to examine the potential of machine learning quantitatively, while evaluating the methodology currently used in the literature.”

Researchers searched for relevant studies across four research databases and identified 56 relevant studies. From there, 54 models from 35 studies had sufficient data, and were included in the quantitative analyses.

“We found that machine learning models achieved a very good overall performance according to clinical diagnostic standards,” Kusuma told us. “The models correctly predicted 66% of the people who would experience a suicide-related event (i.e. ideation, attempt, or death), and correctly predicted 87% of the people who would not experience a suicide-related event.”
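The two figures quoted above correspond to the standard diagnostic measures of sensitivity (the share of true cases the model flags) and specificity (the share of non-cases it clears). A minimal sketch of the arithmetic, using invented confusion-matrix counts chosen only to reproduce the quoted percentages, not the study's actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: true positives / all actual positives.
    Specificity: true negatives / all actual negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical cohort of 1,000 people, 100 of whom experience a suicide-related event.
sens, spec = sensitivity_specificity(tp=66, fn=34, tn=783, fp=117)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=66%, specificity=87%
```

Because suicide-related events are rare, even this level of specificity can still produce many false positives per true positive, which is why the base rate matters when interpreting such numbers.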

However, there was a high prevalence of risk of bias in the research, with many studies processing or analysing the data inappropriately. This isn't a finding specific to machine learning research, but a systemic issue caused largely by a publish-or-perish culture in academia.

“I did expect machine learning models to do well, so I think this review establishes a good benchmark for future research,” Kusuma told us. “I do believe that this review shows the potential of machine learning to transform the future of suicide risk prediction. Automated suicide risk screening would be quicker and more consistent than current methods.”

This could potentially identify many people at risk of suicide without them having to reach out proactively. However, researchers need to be careful to minimise data leakage, which would skew performance measures. Furthermore, many iterations of development and validation need to take place to ensure that the machine learning models can predict suicide risk in previously unseen populations.
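Data leakage of the kind described above often enters through preprocessing: if normalization statistics are computed on the full dataset before splitting, information about the test rows bleeds into training and inflates performance measures. A minimal sketch of the safe ordering, using NumPy with toy data (the split sizes and feature counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # toy feature matrix standing in for clinical data

# Split FIRST, then fit preprocessing on the training rows only.
train, test = X[:80], X[80:]
mu, sigma = train.mean(axis=0), train.std(axis=0)  # statistics from training data only
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma  # reuse training statistics; never refit on test data

# The leaky version to avoid: mu, sigma = X.mean(axis=0), X.std(axis=0)
# computed before the split lets test-set information influence training.
```

The same ordering applies to feature selection and imputation; any step fitted on data that includes the evaluation rows leaks.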

“Prior to deployment, researchers also need to ascertain if artificial intelligence would work in an equitable manner across people from different backgrounds,” Kusuma told us. “For example, a study has found their machine learning models performed better in predicting deaths by suicide in White patients, as opposed to Black and American Indian/Alaskan Native patients (Coley et al., 2022).”

“That isn't to say that artificial intelligence is inherently discriminatory,” Kusuma explained, “but there is less data available for minorities, which often means lower performance in those populations. It's possible that models need to be developed and validated separately for people of different demographic characteristics.”

“Machine learning is an exciting innovation in suicide research,” Kusuma told us. “An improvement in suicide prediction abilities would mean that resources could be allocated to those who need them the most.”

Categories: Depression , Stress , Suicide | Tags: suicide, depression, machine

Patricia Tomasi is a mom, maternal mental health advocate, journalist, and speaker. She writes regularly for the Huffington Post Canada, focusing primarily on maternal mental health after suffering from severe postpartum anxiety twice. You can find her Huffington Post biography here. Patricia is also a Patient Expert Advisor for the North American-based Maternal Mental Health Research Collective and is the founder of the online peer support group, Facebook Postpartum Depression & Anxiety Support Group, with over 1500 members worldwide. Blog: www.patriciatomasiblog.wordpress.com Email: tomasi.patricia@gmail.com

More:
How Can Artificial Intelligence Help With Suicidal Ideation? - Theravive

Neurodiversity Emerges as a Skill in Artificial Intelligence Work – BNN Bloomberg

(Bloomberg) -- Staring closely at the screen, Jordan Wright deftly picks out a barely distinguishable shape with his mouse, bringing to life a stark blue outline from a blur of overexposed features.

It's a process similar to the automated tests that teach computers to distinguish humans from machines by asking someone to identify traffic lights or stop signs in a picture, known as a Captcha.

Only in Wright's case, the shape turns out to be of a Tupolev Tu-160, a supersonic strategic heavy bomber, parked on a Russian base. The outline, one of hundreds a day he picks out from satellite images, is training an algorithm so a US intelligence agency can locate and identify Moscow's firepower in an automated flash.

It's become a run-of-the-mill task for the 25-year-old, who describes himself as on the autism spectrum. Starting in the spring, Wright began working at Enabled Intelligence, a Virginia-based startup that works largely for US intelligence and other federal agencies. Founded in 2020, it specializes in labeling, training and testing the sensitive digital data on which artificial intelligence depends.

Peter Kant, chief executive officer of Enabled Intelligence, said he was inspired to start the company after reading about an Israeli program to recruit people with autism for cyber-intelligence work. The repetitive, detailed work of training artificial intelligence algorithms relies on pattern recognition, puzzle-solving and deep focus that is sometimes a particular strength of autistic workers, he said.

Enabled Intelligence's main type of work, known as data annotation, is usually farmed out to technically skilled but far cheaper labor forces in countries including China, Kenya and Malaysia. That's not an option for US government agencies whose data is sensitive or classified, Kant said, adding that more than half his workforce of 25 are neurodiverse.

“I can easily say this is the best opportunity I've got in my life,” said Wright, who grew up with an infatuation for military aviation, dropped out of college and has since experienced long stints of unemployment in between poorly paid work. Most recently, he bagged frozen groceries.

For decades, workers with developmental disabilities, especially autism, have faced discrimination and disproportionately high unemployment levels. A large shortfall in cybersecurity jobs, along with a new push for workplace acceptance and flexibility, in part spurred by the Covid-19 pandemic, has started to focus attention on the abilities of people who think and work differently.

Enabled Intelligence has adjusted its work rules to accommodate its employees, ditching resumes and interviews for online assessments and staggering work hours for those who find it hard to get in early. It has built three new areas for classified material and hopes to secure government clearances for much of its neurodiverse workforce, something the US intelligence community has sometimes struggled to accommodate in the past. Pay starts at $20 an hour, in line with industry standards, and the company provides health insurance, paid leave and a path for promotion. Enabled Intelligence expects to make revenues of $2 million this year and double that next year, along with doubling its workforce.

The US intelligence community has been slow to catch on to the opportunity, critics say. It falls short of the 12% federal target for workforce representation of persons with disabilities, according to the latest statistics out this month. Until this year, it has also regularly fallen short of the 2% federal target for persons with targeted disabilities, which include those with autism.

“In other countries it's old hat,” said Teresa Thomas, program lead for neurodiverse talent enablement at MITRE, which operates federally funded research and development centers. She cites well-established programs in Denmark, Israel, the UK and Australia, where one state recently appointed a minister for autism.

Thomas has recently spearheaded a new neurodiverse federal workforce pilot to establish a template for the US government to hire and support autistic workers, but so far only one of the country's 18 intelligence agencies, the National Geospatial-Intelligence Agency, known as NGA, has participated. Now the federal government's cyberdefense agency, the Cybersecurity and Infrastructure Security Agency, intends to undertake a similar pilot.

Stephanie La Rue, chief of diversity, equity and inclusion for the Office of the Director of National Intelligence, told Bloomberg the US intelligence community needs to acknowledge that it's “not where we need to be” when it comes to employing people with disabilities.

“It's like turning the Titanic,” said La Rue, adding that NGA's four-person pilot would be reviewed and shared with the wider intelligence community as a promising practice. “Change is going to be incremental.”

Research indicated that neurodiverse intelligence officers on the autism spectrum exhibit the ability to parse large data sets and identify patterns and trends at rates that far exceed folks who are not autistic and were less prone to cognitive bias, La Rue said. Yet securing a clearance to access classified information can still present an additional challenge, according to some observers.

If an office wall board at Enabled Intelligence is any indication, experiences vary. There, 18 anonymous handwritten notes answer the question: “What does neurodiversity mean to you?”

“Difficult. Trying. It's held me back a lot,” says one in an uncertain script. “Strength,” answers a second in careful cursive. A third, in capital letters, declares: “SUPERPOWERS.”

©2022 Bloomberg L.P.

Original post:
Neurodiversity Emerges as a Skill in Artificial Intelligence Work - BNN Bloomberg

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website – JD Supra

Artificial intelligence (AI) is powerful, and the use of it for content generation is on the rise. In fact, some experts estimate that as much as 90 percent of online content may be generated by AI algorithms by 2026.

Many of the popular AI content generators produce well-written, informative content. But is it the right choice for your firm? Before you decide, let's consider the pros and cons of using this unique sort of copy with your digital marketing.

This article explains how AI content generators work, the pros and cons of AI-generated content, and a few tips for utilizing AI content in your digital marketing workflow.

Consumer-facing artificial intelligence tools are pretty straightforward, as far as the consumer is concerned. You provide some inputs, and the machine provides some outputs.

Here's how it works with content writing. You generally provide the AI generator with a topic and keywords. You can usually select the format you'd like the output to take, such as a blog post or teaser copy. Then, it's as simple as clicking “GO”.

The content generator will scrape the web and draft copy for your needs. Some tools can take existing content and rewrite it, which can make content marketing a lot easier.
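The topic-keywords-format workflow described above can be sketched as the request such a tool might assemble behind the scenes. The function and field names here are hypothetical illustrations, not any particular vendor's API:

```python
import json

def build_generation_request(topic, keywords, output_format="blog_post"):
    """Assemble the user's inputs (topic, keywords, desired format) into a
    request body for a hypothetical content-generation service."""
    return {
        "topic": topic,
        "keywords": keywords,
        "format": output_format,  # e.g. "blog_post" or "teaser"
    }

payload = build_generation_request(
    topic="estate planning basics",
    keywords=["wills", "trusts", "probate"],
)
print(json.dumps(payload, indent=2))
```

The tool's job is then to turn that structured request into draft copy; everything after the payload is the vendor's proprietary pipeline.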

Not all AI content generators cost money, but you'll need to pay something to access the better tools, or to produce a lot of content.

If youre excited about the possibilities, great! There are some significant benefits to AI content generators.

Here are a few pros of AI content tools:

To sum up, AI content tools can quickly produce natural-sounding copy at a fraction of the cost of paying a real copywriter.

There are several important drawbacks to consider with AI-generated content. Speed and cost aren't everything when it comes to content generation.

Here are several cons that come with using AI content tools:

AI tools can be hit-or-miss when it comes to empathy and accuracy. Law firms should be very careful when publishing this type of content. There are also serious SEO concerns with using AI content.

Overall, it's clear that AI-generated content can provide value. The question is how to best incorporate AI content into your digital marketing efforts.

Here are a few best practices if you choose to use AI-generated content.

All AI-generated content should be reviewed by a real human being prior to publication. We recommend hiring a legal professional to review and edit AI copy. A copywriter can help smooth the rough edges, too. Because the content is already written, the hourly rate you'll pay these professionals should be minimal.

Don't use AI-generated content on your website. This type of tool should be a last resort. If you do use machine-generated copy on your website, make sure to block it from being crawled to avoid search engine penalties. Your website developer can advise on the best way to do this.
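One widely supported way to keep machine-generated pages out of search results is the robots meta tag; the page path in the comment below is a hypothetical example:

```html
<!-- Placed in the <head> of each AI-drafted page, e.g. /resources/ai-draft-faq.html -->
<meta name="robots" content="noindex">
```

Note the distinction: a robots.txt Disallow rule blocks crawling, while the noindex directive above blocks indexing, and a noindex page must remain crawlable for search engines to see the directive. Your developer can choose the right mechanism for your setup.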

Do not hire an agency that brags about AI content as a core strategy. SEO and web development companies should be very aware of the risks that come with using AI content. If they suggest AI-generated content, ask them how they plan to protect your firm against search engine penalties, and don't work with them if they don't have a good answer.

Our current position is that AI-generated content can be helpful for short blurbs, such as newsletters to clients. All AI content should only be deployed with human oversight.

We recommend against using AI-generated content for website copy. If it must be used, it's important to work with a developer or agency that understands how to communicate with search engines so you aren't penalized for using AI tools.

[View source.]

Go here to see the original:
Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website - JD Supra

Oracle joins up with Nvidia to boost its artificial intelligence capabilities – The National

US software company Oracle announced a multiyear partnership with Nvidia, a global leader in artificial intelligence hardware and software that designs and manufactures graphics processing units (GPUs) for various industries, to boost its cloud infrastructure.

Under the partnership, announced in parallel with the opening of the Oracle Cloud World event in Las Vegas, Nevada, Oracle will use tens of thousands of Nvidia's GPUs to accelerate the pace of computing and AI advancements in its cloud infrastructure.

Following the announcement, Oracle's stock was trading slightly up at $67.03 at 5.40pm New York time, while Nvidia was trading up at $119.67 a share.

The Texas-based company intends to bring the full Nvidia computing stack, including GPUs, systems and software, to Oracle Cloud Infrastructure (OCI).

GPUs can process various tasks simultaneously, making them useful for machine learning, video editing and gaming applications.

Nvidia is a global leader in AI hardware and software. Reuters

OCI is adding tens of thousands more Nvidia GPUs, including the A100 and upcoming H100, to its capacity, Oracle said in a statement.

About a month ago, the US restricted Nvidia from exporting its A100 and H100 chips, designed to speed up machine-learning tasks, to China and Russia.

“Combined with OCI's AI cloud infrastructure, cluster networking and storage, this partnership provides enterprises a broad, easily accessible portfolio of options for AI training and deep learning inference at scale,” Oracle said.

“To drive long-term success in today's business environment, organisations need answers and insight faster than ever,” the company's chief executive Safra Catz said.

“Our expanded alliance with Nvidia will deliver the best of both companies' expertise to help customers across industries, from health care and manufacturing to telecommunications and financial services, overcome the multitude of challenges they face.”

The Oracle and Nvidia partnership comes as more companies integrate AI and machine-learning tools to streamline their operations and as AI models become more complex.

The companies did not disclose the financial details of the deal.

US technology company Oracle announced a series of new cloud-focused products at Oracle Cloud World on Tuesday. Reuters

“Accelerated computing and AI are key to tackling rising costs in every aspect of operating businesses,” California-based Nvidia's founder and chief executive Jensen Huang said.

“Enterprises are increasingly turning to cloud-first AI strategies that enable fast development and scalable deployment. Our partnership with Oracle will put Nvidia AI within easy reach for thousands of companies.”

The global AI market is expected to grow at an annual rate of more than 38 per cent from 2022 to 2030, from $93.5 billion last year, Grand View Research reported.
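As a quick sanity check, compounding the reported base at the reported rate shows the scale those figures imply. The base year and exact rate are read off the article's quoted numbers, so treat this as an illustration rather than the research firm's own projection:

```python
# A compound annual growth rate (CAGR) is applied multiplicatively each year.
base = 93.5   # market size in $bn in 2021, per the article
cagr = 0.38   # "more than 38 per cent" annual growth
years = 9     # 2022 through 2030 inclusive

projected = base * (1 + cagr) ** years
print(f"~${projected:,.0f}bn by 2030")
```

At exactly 38 per cent this lands around $1.7 trillion; since the article says “more than” 38 per cent, the implied 2030 figure would be somewhat higher.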

AI will be the common theme in the top 10 technology trends in the next few years, and these are expected to quicken breakthroughs across key economic sectors and society, Alibaba Damo Academy, the global research arm of Chinese company Alibaba Group, said in a report.

Updated: October 18, 2022, 10:01 PM

Read more here:
Oracle joins up with Nvidia to boost its artificial intelligence capabilities - The National