Media Search:



Texxa AI, Where ideas take flight: Revolutionizing AI Solutions for Businesses and Individuals – GlobeNewswire

London, England, April 22, 2024 (GLOBE NEWSWIRE) -- Texxa AI stands at the forefront of artificial intelligence innovation, offering a comprehensive suite of cutting-edge solutions tailored to meet the diverse needs of businesses and individuals. Our platform leverages the latest advancements in natural language processing (NLP), machine learning (ML), computer vision, and other AI algorithms to deliver unparalleled capabilities in chatbot development, image generation, video editing, content personalization, and data analysis.

At Texxa AI, we are committed to democratizing access to advanced AI technology, empowering users from all backgrounds to leverage the power of AI for their unique applications. Whether you're a seasoned developer looking to create innovative solutions or a business seeking to streamline operations and enhance customer engagement, Texxa AI provides the tools and resources you need to succeed.

Texxa AI announces its presale, opening participation to companies, institutions, and individuals from all walks of life so they can take part in its use cases, which is the ultimate vision of Texxa AI. Institutions are expected to invest heavily in Texxa, which will propel it to a multi-billion-dollar market cap at launch.

One of the key features of Texxa AI is its chatbot development capability. The platform enables users to create sophisticated chatbots that can handle a wide range of customer inquiries, providing a seamless and efficient customer-support experience. Texxa AI also offers powerful image generation and enhancement tools and advanced video editing, along with content personalization and optimization that let users create stunning, personalized visual content with ease. Data analysis and insights generation are also core features of Texxa AI.

With a live utility (https://www.texxa.ai/app.html) and bankable key features, Texxa AI is positioned for success. With an innovatively designed tokenomics model and a maximum supply of 10 million coins, Texxa AI embodies stability, security, and growth potential, positioning it as a cornerstone of AI. This is an opportunity for investors and crypto enthusiasts to enter a new era of investment. By participating, investors become integral contributors to Texxa, advancing the adoption of artificial intelligence.

Over 1,000 users and more than 20 companies are part of Texxa AI at the moment. All payments, both in fiat and in crypto, will be converted into TEXXA, allowing for constant, fast, and limitless token price growth! This will ensure high currency usage and increase its value over time.

Texxa AI is a powerful and versatile live platform that offers a wide range of innovative solutions for businesses and individuals, making it a bankable opportunity for novices and experts alike. Texxa AI has the tools and capabilities you need to succeed in today's digital landscape, both as a user and as an investor.

To learn more about Texxa AI:
Website: https://www.texxa.ai
Twitter/X: https://x.com/TexxaAI
Telegram: https://t.me/TexxaAI

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended that you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

More:
Texxa AI, Where ideas take flight: Revolutionizing AI Solutions for Businesses and Individuals - GlobeNewswire

Machine Learning Helps Scientists Locate the Neurological Origin of Psychosis – ExtremeTech

Researchers in the United States, Chile, and the United Kingdom have leveraged machine learning to home in on the parts of the brain responsible for psychosis. Their findings help illuminate a common yet elusive experience and could contribute to the development of novel treatments for psychosis and the conditions that cause it.

Around 3 in every 100 people will experience at least one psychotic episode in their lifetimes. Commonly misunderstood, these episodes are characterized by hallucinations (a false perception involving the senses) or delusions (false beliefs not rooted in reality). Many people who experience psychosis have a condition like schizophrenia or bipolar disorder; others have a history of substance abuse, and still others have no particular condition at all.

Regardless of its cause, psychosis can be debilitating for those who experience it, leading some people to seek out antipsychotic medication aimed at staving off future episodes. Though antipsychotic medications are often a godsend for the people who take them, they've historically disrupted neurological psychosis research. During brain scans, it's difficult to know whether specific brain activity can be attributed to the person's condition or to the drugs they're taking. This means medical professionals and pharmaceutical companies work with a fairly limited understanding of psychosis as they help patients manage their episodes.

Researchers at Stanford University, the University of California Los Angeles, Universidad del Desarrollo, and the University of Oxford relied on two strategies to circumvent this issue. To start, they gathered study participants from a wide range of ages and conditions in the hope of uncovering an overarching theme. The group of nearly 900 participants included people ages 6 to 39, some of whom had a history of psychosis or schizophrenia and some of whom had never experienced either. Just over 100 participants had 22q11.2 deletion syndrome, meaning they're missing part of one of their copies of chromosome 22, a condition known to carry a 30% risk of experiencing psychosis, schizophrenia, or both. Another 120 participants experienced psychosis but had not been diagnosed with any particular hallucination- or delusion-causing condition.

Credit: Supekar et al., Molecular Psychiatry. DOI: 10.1038/s41380-024-02495-8

The team also used machine learning to spot the minute distinctions between the brain activity of those who experience psychosis and the brain activity of those who don't. To map out the participants' neurological activity, the team used functional magnetic resonance imaging (fMRI). This technique allows medical professionals and researchers to track the tiny fluctuations in blood flow triggered by brain changes.

With a custom spatiotemporal deep neural network (stDNN), the researchers compared the functional brain signatures of all participants and found a consistent pattern among those with 22q11.2 deletion syndrome. Regardless of demographic, these participants experienced what appeared to be "malfunctions" in the anterior insula and the ventral striatum. These two parts of the brain are involved in humans' cognitive filters and reward predictors, respectively. The stDNN continued to find clear discrepancies between the anterior insulae and ventral striata of those who experienced psychosis and those who did not, further indicating that these two regions of the brain play a vital role in hallucinations and delusions.

These findings, shared Friday in a paper in Molecular Psychiatry, support a standing theory that psychosis arises from malfunctioning cognitive filters. Scientists have long wondered whether, during a psychotic episode, the brain struggles to distinguish what's true from what isn't. This is a key function of the brain's salience network, which detects and assigns importance to incoming stimuli. When the salience network cannot work correctly, the brain might assign importance and attention to the wrong stimuli, resulting in a hallucination or delusion.

"Our discoveries underscore the importance of approaching people with psychosis with compassion," said Stanford neuroscientist and senior study author Dr. Vinod Menon in a statement. Menon and his colleague, psychiatrist Kaustubh Supekar, hope their findings will assist in the development of antipsychotic treatments, especially for those with schizophrenia.

Read the rest here:
Machine Learning Helps Scientists Locate the Neurological Origin of Psychosis - ExtremeTech

Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart | Amazon Web Services – AWS Blog

This post is co-authored by Jackie Rocca, VP of Product, AI at Slack

Slack is where work happens. It's the AI-powered platform for work that connects people, conversations, apps, and systems together in one place. With the newly launched Slack AI, a trusted, native, generative artificial intelligence (AI) experience available directly in Slack, users can surface and prioritize information so they can find their focus and do their most productive work.

We are excited to announce that Slack, a Salesforce company, has collaborated with Amazon SageMaker JumpStart to power Slack AI's initial search and summarization features and provide safeguards for Slack to use large language models (LLMs) more securely. Slack worked with SageMaker JumpStart to host industry-leading third-party LLMs so that data is not shared with the infrastructure owned by third-party model providers.

This keeps customer data in Slack at all times and upholds the same security practices and compliance standards that customers expect from Slack itself. Slack is also using Amazon SageMaker inference capabilities for advanced routing strategies to scale the solution to customers with optimal performance, latency, and throughput.

"With Amazon SageMaker JumpStart, Slack can access state-of-the-art foundation models to power Slack AI, while prioritizing security and privacy. Slack customers can now search smarter, summarize conversations instantly, and be at their most productive."

Jackie Rocca, VP Product, AI at Slack

SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select foundation models (FMs) quickly based on predefined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can effortlessly deploy them into production with the user interface or SDK. In addition, you can access prebuilt solutions to solve common use cases and share ML artifacts, including ML models and notebooks, within your organization to accelerate ML model building and deployment. None of your data is used to train the underlying models. All the data is encrypted and is never shared with third-party vendors so you can trust that your data remains private and confidential.

Check out the SageMaker JumpStart model page for available models.

Slack launched Slack AI to provide native generative AI capabilities so that customers can easily find and consume large volumes of information quickly, enabling them to get even more value out of their shared knowledge in Slack. For example, users can ask a question in plain language and instantly get clear and concise answers with enhanced search. They can catch up on channels and threads in one click with conversation summaries. And they can access personalized, daily digests of what's happening in select channels with the newly launched recaps.

Because trust is Slack's most important value, Slack AI runs on an enterprise-grade infrastructure that Slack built on AWS, upholding the same security practices and compliance standards that customers expect. Slack AI is built for security-conscious customers and is designed to be secure by design: customer data remains in-house, data is not used for LLM training purposes, and data remains siloed.

SageMaker JumpStart provides access to many LLMs, and Slack selects the FMs that best fit its use cases. Because these models are hosted on Slack-owned AWS infrastructure, data sent to models during invocation doesn't leave Slack's AWS infrastructure. In addition, to provide a secure solution, data sent for invoking SageMaker models is encrypted in transit. The data sent to SageMaker JumpStart endpoints for invoking models is not used to train base models. SageMaker JumpStart allows Slack to support high standards for security and data privacy, while also using state-of-the-art models that help Slack AI perform optimally for Slack customers.

SageMaker JumpStart endpoints serving Slack business applications are powered by AWS instances. SageMaker supports a wide range of instance types for model deployment, which allows Slack to pick the instance best suited to the latency and scalability requirements of Slack AI use cases. Slack AI has access to multi-GPU-based instances to host its SageMaker JumpStart models. Multiple GPU instances allow each instance backing Slack AI's endpoint to host multiple copies of a model. This helps improve resource utilization and reduce model deployment cost. For more information, refer to Amazon SageMaker adds new inference capabilities to help reduce foundation model deployment costs and latency.

The following diagram illustrates the solution architecture.

To use the instances most effectively and meet its concurrency and latency requirements, Slack used the routing strategies SageMaker offers for its endpoints. By default, a SageMaker endpoint distributes incoming requests across ML instances at random, a strategy called RANDOM. However, with generative AI workloads, requests and responses can be extremely variable, so it's desirable to load balance by considering the capacity and utilization of each instance rather than routing at random. To distribute requests effectively across the instances backing its endpoints, Slack uses the LEAST_OUTSTANDING_REQUESTS (LAR) routing strategy. This strategy routes each request to the instance with the most spare capacity instead of picking any available instance at random. LAR provides more uniform load balancing and resource utilization. As a result, Slack AI saw its p95 latency drop by more than 39% after enabling LEAST_OUTSTANDING_REQUESTS compared to RANDOM.
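The difference between the two strategies can be sketched with a toy routing function. This is an illustrative simulation only, not SageMaker's internal implementation; on a real endpoint the strategy is selected in the endpoint configuration rather than in application code:

```python
import random

def route_random(outstanding):
    # RANDOM strategy: pick any instance uniformly, ignoring current load.
    return random.randrange(len(outstanding))

def route_lar(outstanding):
    # LEAST_OUTSTANDING_REQUESTS: pick the instance with the fewest
    # in-flight requests, so new work avoids busy instances.
    return min(range(len(outstanding)), key=outstanding.__getitem__)

# Toy example: three instances with 5, 1, and 3 requests already in flight.
in_flight = [5, 1, 3]
print(route_lar(in_flight))  # -> 1, the least-loaded instance
```

With highly variable generative AI response lengths, least-outstanding routing avoids queuing a new request behind an instance that is still streaming a long completion, which is consistent with the p95 latency improvement Slack reports.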

For more details on SageMaker routing strategies, see Minimize real-time inference latency by using Amazon SageMaker routing strategies.

Slack is delivering native generative AI capabilities that will help its customers be more productive and easily tap into the collective knowledge embedded in their Slack conversations. With fast access to a large selection of FMs hosted on dedicated instances through SageMaker JumpStart, plus advanced load-balancing capabilities, Slack AI can deliver rich generative AI features faster and more robustly, while upholding Slack's trust and security standards.

Learn more about SageMaker JumpStart, Slack AI and how the Slack team built Slack AI to be secure and private. Leave your thoughts and questions in the comments section.

Jackie Rocca is VP of Product at Slack, where she oversees the vision and execution of Slack AI, which brings generative AI natively and securely into Slack's user experience. Now she's on a mission to help customers accelerate their productivity and get even more value out of their conversations, data, and collective knowledge with generative AI. Prior to her time at Slack, Jackie was a Product Manager at Google for more than six years, where she helped launch and grow YouTube TV. Jackie is based in the San Francisco Bay Area.

Rachna Chadha is a Principal Solutions Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that the ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Maninder (Mani) Kaur is the AI/ML Specialist lead for Strategic ISVs at AWS. With her customer-first approach, Mani helps strategic customers shape their AI/ML strategy, fuel innovation, and accelerate their AI/ML journey. Mani is a firm believer in ethical and responsible AI and strives to ensure that her customers' AI solutions align with these principles.

Gene Ting is a Principal Solutions Architect at AWS. He is focused on helping enterprise customers build and operate workloads securely on AWS. In his free time, Gene enjoys teaching kids technology and sports, as well as following the latest on cybersecurity.

Alan Tan is a Senior Product Manager with SageMaker, leading efforts on large model inference. He's passionate about applying machine learning to the area of analytics. Outside of work, he enjoys the outdoors.

Here is the original post:
Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart | Amazon Web Services - AWS Blog

DeepMind CEO says Google to spend more than $100B on AGI despite hype – Cointelegraph

Google's not backing down from the challenge posed by Microsoft when it comes to the artificial intelligence sector. At least not according to the CEO of Google DeepMind, Demis Hassabis.

Speaking at a TED conference in Canada, Hassabis recently went on the record saying that he expected Google to spend more than $100 billion on the development of artificial general intelligence (AGI) over time. His comments reportedly came in response to a question concerning Microsoft's recent Stargate announcement.

Microsoft and OpenAI are reportedly in discussions to build a $100 billion supercomputer project for the purpose of training AI systems. According to The Intercept, a person wishing to remain anonymous, who has had direct conversations with OpenAI CEO Sam Altman and seen the initial cost estimates for the project, says it's currently being discussed under the codename Stargate.

To put the proposed costs into perspective, the world's most powerful supercomputer, the U.S.-based Frontier system, cost approximately $600 million to build.

According to the report, Stargate wouldn't be a single system like Frontier. It would instead be a series of computers spread across the U.S., built in five phases, with the final phase being the Stargate system itself.

Hassabis' comments don't hint at exactly how Google might respond, but they seemingly confirm that the company is aware of Microsoft's endeavors and plans on investing just as much, if not more.

Ultimately, the stakes are simple. Both companies are vying to become the first organization to develop AGI. Today's AI systems are constrained by their training methods and data and, as such, fall well short of human-level intelligence across myriad benchmarks.

AGI is a nebulous term for an AI system theoretically capable of doing anything an average adult human could do, given the right resources. An AGI system with access to a line of credit or a cryptocurrency wallet and the internet, for example, should be able to start and run its own business.

Related: DeepMind co-founder says AI will be able to invent, market, run businesses by 2029

The main challenge to being the first company to develop AGI is that there's no scientific consensus on exactly what an AGI is or how one could be created.

Even among the world's most famous AI scientists (Meta's Yann LeCun, Google's Demis Hassabis, and others), there is no small amount of disagreement as to whether AGI can be achieved using the current brute-force method of increasing datasets and training parameters, or whether it can be achieved at all.

In a Financial Times article published in March, Hassabis compared the current AI/AGI hype cycle, and the scams it has attracted, to the cryptocurrency market. Despite the hype, both the AI and crypto markets have surged in the first four months of 2024.

Where Bitcoin, the world's most popular cryptocurrency, sat at about $30,395 per coin in April 2023, it now trades above $60,000 as of this article's publication, having only recently retreated from an all-time high of about $73,000.

Meanwhile, the current AI industry leader, Microsoft, has seen its stock go from $286 a share to around $416 over the same period.

Continued here:

DeepMind CEO says Google to spend more than $100B on AGI despite hype - Cointelegraph

Congressional panel outlines five guardrails for AI use in House – FedScoop

A House panel has outlined five guardrails for deployment of artificial intelligence tools in the chamber, providing more detailed guidance as lawmakers and staff explore the technology.

The Committee on House Administration released the guardrails in a flash report on Wednesday, along with an update on the committee's work exploring AI in the legislative branch. The guardrails are human oversight and decision-making; clear and comprehensive policies; robust testing and evaluation; transparency and disclosure; and education and upskilling.

"These are intended to be general, so that many House Offices can independently apply them to a wide variety of different internal policies, practices, and procedures," the report said. "House Committees and Member Offices can use these to inform their internal AI practices. These are intended to be applied to any AI tool or technology in use in the House."

The report comes as the committee and its Subcommittee on Modernization have focused on AI strategy and implementation in the House, and is the fifth such document it has put out since September 2023.

According to the report, the guardrails are a product of a roundtable the committee held in March that included participants such as the National Institute of Standards and Technology's Elham Tabassi, the Defense Department's John Turner, the Federation of American Scientists' Jennifer Pahlka, the House chief administrative officer, the clerk of the House, and senior staff from lawmakers' offices.

The roundtable represented the first known instance of elected officials directly discussing AI's use in parliamentary operations, the report said. The report added that templates for the discussion were also shared with the think tank Bússola Tech, which works on the modernization of parliaments and legislatures.

Already, members of Congress are experimenting with AI tools for things like research assistance and drafting, though use doesn't appear widespread. Meanwhile, both chambers have introduced policies to rein in use. In the House, the CAO has approved only ChatGPT Plus, while the Senate has allowed use of ChatGPT, Microsoft Bing Chat, and Google Bard with specific guardrails.

Interestingly, AI was used in the drafting of the committee's report, modeling the transparency guardrail the committee outlined. A footnote in the document discloses that "early drafts of this document were written by humans. An AI tool was used in the middle of the drafting process to research editorial clarity and succinctness. Subsequent reviews and approvals were human."

Here is the original post:

Congressional panel outlines five guardrails for AI use in House - FedScoop