Archive for the ‘Machine Learning’ Category

A secure approach to generative AI with AWS | Amazon Web Services – AWS Blog

Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels.

FMs and the applications built around them represent extremely valuable investments for our customers. They're often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to optimize the model's output. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. Because their data and model weights are incredibly valuable, customers require them to stay protected, secure, and private, whether that means protection from their own administrators' accounts, from their customers, from vulnerabilities in software running in their own environments, or even from their cloud service provider having access.

At AWS, our top priority is safeguarding the security and confidentiality of our customers' workloads. We think about security across the three layers of our generative AI stack: the infrastructure used to train LLMs and other FMs and run inference on them, the tools used to build securely with those models, and the applications that use them.

Each layer is important to making generative AI pervasive and transformative.

With the AWS Nitro System, we delivered a first-of-its-kind innovation on behalf of our customers. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access your workloads or data running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Customers have benefited from this confidentiality and isolation from AWS operators on all Nitro-based EC2 instances since 2017.

By design, there is no mechanism for any Amazon employee to access a Nitro EC2 instance that customers use to run their workloads, or to access data that customers send to a machine learning (ML) accelerator or GPU. This protection applies to all Nitro-based instances, including instances with ML accelerators like AWS Inferentia and AWS Trainium, and instances with GPUs like P4, P5, G5, and G6.

The Nitro System enables Elastic Fabric Adapter (EFA), which uses the AWS-built Scalable Reliable Datagram (SRD) communication protocol for cloud-scale, elastic, large-scale distributed training, enabling the only always-encrypted Remote Direct Memory Access (RDMA)-capable network. All communication through EFA is encrypted with VPC encryption without incurring any performance penalty.

The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. AWS delivers a high level of protection for customer workloads, and we believe this is the level of security and confidentiality that customers should expect from their cloud provider. This level of protection is so critical that we've added it to our AWS Service Terms to provide an additional assurance to all of our customers.

From day one, AWS AI infrastructure and services have had built-in security and privacy features to give you control over your data. As customers move quickly to implement generative AI in their organizations, you need to know that your data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. The security of model weights (the parameters that a model learns during training and that are critical to its ability to make predictions) is paramount to protecting your data and maintaining model integrity.

This is why it is critical for AWS to continue to innovate on behalf of our customers to raise the bar on security across each layer of the generative AI stack. To do this, we believe that you must have security and confidentiality built in across each layer of the generative AI stack. You need to be able to secure the infrastructure to train LLMs and other FMs, build securely with tools to run LLMs and other FMs, and run applications that use FMs with built-in security and privacy that you can trust.

At AWS, securing AI infrastructure refers to zero access to sensitive AI data, such as AI model weights and data processed with those models, by any unauthorized person, whether at the infrastructure operator or at the customer. It is built on three key principles: complete isolation of the AI data from the infrastructure operator, the ability for customers to isolate AI data from their own administrators and software, and protected communications between the devices that process that data.

The Nitro System fulfills the first principle of Secure AI Infrastructure by isolating your AI data from AWS operators. The second principle provides you with a way to remove administrative access of your own users and software to your AI data. AWS not only offers you a way to achieve that, but we have also made it straightforward and practical by investing in an integrated solution between AWS Nitro Enclaves and AWS Key Management Service (AWS KMS). With Nitro Enclaves and AWS KMS, you can encrypt your sensitive AI data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inferencing. Throughout this entire process, the sensitive AI data is encrypted and isolated from your own users and software on your EC2 instance, and AWS operators cannot access this data. Use cases that have benefited from this flow include running LLM inferencing in an enclave. Until now, Nitro Enclaves have operated only in the CPU, limiting the potential for larger generative AI models and more complex processing.
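To make that flow concrete, here is a minimal, hypothetical sketch (not AWS reference code) of the client-side half of the pattern: a data key is requested from AWS KMS under a customer-managed key, the model weights are encrypted locally, and only the ciphertext and the wrapped data key are stored for later use by an isolated environment. The key ARN and file names are placeholders, and the enclave-side attestation and decryption steps are omitted.

```python
# Hypothetical sketch of envelope-encrypting model weights with a
# customer-managed AWS KMS key before handing them to an isolated
# compute environment. Key ARN and file names are placeholders.
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

# Ask KMS for a fresh data key; keep only the encrypted copy at rest.
resp = kms.generate_data_key(KeyId=KEY_ARN, KeySpec="AES_256")
plaintext_key = resp["Plaintext"]
wrapped_key = resp["CiphertextBlob"]

# Encrypt the model weights locally with AES-256-GCM using the data key.
with open("model_weights.bin", "rb") as f:  # placeholder artifact
    weights = f.read()
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, weights, None)

# Persist only the ciphertext and the wrapped data key. The isolated
# environment later calls KMS Decrypt (presenting its attestation
# document) to unwrap the data key and decrypt the weights.
with open("weights.enc", "wb") as f:
    f.write(nonce + ciphertext)
with open("datakey.enc", "wb") as f:
    f.write(wrapped_key)
```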

We announced our plans to extend this Nitro end-to-end encrypted flow to include first-class integration with ML accelerators and GPUs, fulfilling the third principle. You will be able to decrypt and load sensitive AI data into an ML accelerator for processing while providing isolation from your own operators and verified authenticity of the application used for processing the AI data. Through the Nitro System, you can cryptographically validate your applications to AWS KMS and decrypt data only when the necessary checks pass. This enhancement allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads.
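The "decrypt data only when the necessary checks pass" step is typically expressed in the KMS key policy: the key grants Decrypt only to callers whose Nitro Enclaves attestation document carries the expected measurements. The sketch below builds such a policy statement as a Python dict; the account ID, role name, and image hash are placeholders, and the attestation condition key name is an assumption based on the Nitro Enclaves documentation, so verify it against current AWS docs before relying on it.

```python
# Hypothetical sketch of a KMS key policy statement that allows Decrypt
# only when the caller's Nitro Enclaves attestation document matches an
# expected enclave image measurement. Account ID, role, and hash are
# placeholders; verify the condition key name against current AWS docs.
import json

attestation_gated_decrypt = {
    "Sid": "AllowDecryptOnlyFromAttestedEnclave",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/enclave-parent-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            # Assumed Nitro Enclaves attestation condition key.
            "kms:RecipientAttestation:ImageSha384": "<expected-enclave-image-sha384>"
        }
    },
}

print(json.dumps(attestation_gated_decrypt, indent=2))
```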

We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2, as well as GPU instances based on NVIDIA's upcoming Blackwell architecture, both of which offer secure communications between devices, the third principle of Secure AI Infrastructure. AWS and NVIDIA are collaborating closely to bring a joint solution to market, including NVIDIA's new Blackwell GPU platform, which couples NVIDIA's GB200 NVL72 solution with the Nitro System and EFA technologies to provide an industry-leading solution for securely building and deploying next-generation generative AI applications.

Today, tens of thousands of customers are using AWS to experiment with and move transformative generative AI applications into production. Generative AI workloads contain highly valuable and sensitive data that needs this level of protection from your own operators and from the cloud service provider. Customers using AWS Nitro-based EC2 instances have received this level of protection and isolation from AWS operators since 2017, when we launched our innovative Nitro System.

At AWS, we're continuing that innovation as we invest in building performant and accessible capabilities to make it practical for our customers to secure their generative AI workloads across the three layers of the generative AI stack, so that you can focus on what you do best: building and extending the uses of generative AI to more areas. Learn more here.

Anthony Liguori is an AWS VP and Distinguished Engineer for EC2

Colm MacCárthaigh is an AWS VP and Distinguished Engineer for EC2

Continued here:
A secure approach to generative AI with AWS | Amazon Web Services - AWS Blog

Imbalanced Learn: the Python library for rebuilding ML datasets – DataScientest

As mentioned earlier, one of the great advantages of Imbalanced Learn is its native integration with scikit-learn: a Python library commonly used for machine learning.

This integration makes it very easy for users to incorporate Imbalanced Learn's functionality into their learning pipelines, combining resampling techniques with scikit-learn estimators to build robust and balanced models.
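As a simple illustration of that integration, the sketch below chains a SMOTE resampler with a scikit-learn classifier inside imbalanced-learn's Pipeline, which applies resampling during fitting only. The dataset is synthetic, so all names and parameters here are illustrative rather than taken from the article.

```python
# Minimal sketch: an imbalanced-learn resampler and a scikit-learn
# estimator combined in one pipeline, trained on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # imblearn's Pipeline supports resamplers

# Synthetic, heavily imbalanced binary classification problem (95% / 5%).
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),            # oversample the minority class
    ("clf", RandomForestClassifier(random_state=0)),
])

# Resampling is applied only when fitting, never to the held-out test data.
pipe.fit(X_train, y_train)
print(classification_report(y_test, pipe.predict(X_test)))
```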

In addition, it can also be integrated with other Machine Learning frameworks and tools such as TensorFlow, PyTorch, and other popular libraries.

This broad compatibility allows users to exploit its advanced functionality in a variety of environments and architectures, offering greatly increased flexibility and adaptability.

Machine Learning researchers and engineers are able to apply advanced resampling techniques in areas such as computer vision, natural language processing, and other applications requiring deep neural network architectures.

With the move towards distributed architectures and edge computing environments, the integration of Imbalanced Learn into cloud and edge solutions has also become essential.

Compatible libraries and Kubernetes orchestration tools can facilitate the deployment and management of balanced models, enabling efficient scaling and real-time execution in diverse and dynamic environments.

More:
Imbalanced Learn: the Python library for rebuilding ML datasets - DataScientest

AI has a lot of terms. We’ve got a glossary for what you need to know – Quartz

Nvidia CEO Jensen Huang. Photo: Justin Sullivan (Getty Images)

Let's start with the basics for a refresher. Generative artificial intelligence is a category of AI that uses data to create original content. In contrast, classic AI, built on machine learning, could only offer predictions based on data inputs, not brand-new and unique answers. But generative AI uses deep learning, a form of machine learning that uses artificial neural networks (software programs) resembling the human brain, so computers can perform human-like analysis.

Generative AI isn't grabbing answers out of thin air, though. It's generating answers based on the data it's trained on, which can include text, video, audio, and lines of code. Imagine, say, waking up from a coma, blindfolded, and all you can remember is 10 Wikipedia articles. All of your conversations with another person about what you know are based on those 10 Wikipedia articles. It's kind of like that, except generative AI uses millions of such articles and a whole lot more.

View post:
AI has a lot of terms. We've got a glossary for what you need to know - Quartz

Texxa AI, Where ideas take flight: Revolutionizing AI Solutions for Businesses and Individuals – GlobeNewswire

London, England, April 22, 2024 (GLOBE NEWSWIRE) -- Texxa AI stands at the forefront of artificial intelligence innovation, offering a comprehensive suite of cutting-edge solutions tailored to meet the diverse needs of businesses and individuals. Our platform leverages the latest advancements in natural language processing (NLP), machine learning (ML), computer vision, and other AI algorithms to deliver unparalleled capabilities in chatbot development, image generation, video editing, content personalization, and data analysis.

At Texxa AI, we are committed to democratizing access to advanced AI technology, empowering users from all backgrounds to leverage the power of AI for their unique applications. Whether you're a seasoned developer looking to create innovative solutions or a business seeking to streamline operations and enhance customer engagement, Texxa AI provides the tools and resources you need to succeed.

Texxa AI announces its presale, allowing companies, institutions, and people from different walks of life to participate in its use cases, which is the ultimate vision of Texxa AI. Institutions are expected to invest heavily in Texxa, which would propel it to a multi-billion-dollar market cap at launch.

One of the key features of Texxa AI is its chatbot development capabilities. Our platform enables users to create sophisticated chatbots that can handle a wide range of customer inquiries, providing a seamless and efficient customer support experience. Texxa AI also offers powerful image generation and enhancement tools, advanced video editing, and content personalization and optimization capabilities, allowing users to create stunning, personalized, and optimized visual content with ease. Data analysis and insights generation are also core features of Texxa AI.

With a live utility (https://www.texxa.ai/app.html) and bankable key features, Texxa AI will be great. With an innovatively designed tokenomics model and a maximum supply of 10 million coins, Texxa AI embodies stability, security, and growth potential, positioning it as a cornerstone of AI. This is an opportunity for investors and crypto enthusiasts to get into a new era of investment. By participating, investors become integral contributors to Texxa, propelling the technological leverage of artificial intelligence.

Over 1,000 users and more than 20 companies are part of Texxa AI at the moment. All payments, both in fiat and in crypto, will be converted into TEXXA, allowing for constant, fast, and limitless token price growth! This will ensure high currency usage and increase its value over time.

Texxa AI is a powerful and versatile live platform that offers a wide range of innovative solutions for businesses and individuals. This makes it a bankable investment for all, whether a novice or an expert. Texxa AI has the tools and capabilities you need to succeed in today's digital landscape and also as an investment.

To learn more about Texxa AI:
Website: https://www.texxa.ai
Twitter/X: https://x.com/TexxaAI
Telegram: https://t.me/TexxaAI

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

More:
Texxa AI, Where ideas take flight: Revolutionizing AI Solutions for Businesses and Individuals - GlobeNewswire

Using machine learning to identify patients with cancer that would benefit from immunotherapy – Medical Xpress

Follow this link:
Using machine learning to identify patients with cancer that would benefit from immunotherapy - Medical Xpress
