Media Search:



Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart | Amazon Web Services – AWS Blog

This post is co-authored by Jackie Rocca, VP of Product, AI at Slack

Slack is where work happens. It's the AI-powered platform for work that connects people, conversations, apps, and systems together in one place. With the newly launched Slack AI, a trusted, native, generative artificial intelligence (AI) experience available directly in Slack, users can surface and prioritize information so they can find their focus and do their most productive work.

We are excited to announce that Slack, a Salesforce company, has collaborated with Amazon SageMaker JumpStart to power Slack AI's initial search and summarization features and provide safeguards for Slack to use large language models (LLMs) more securely. Slack worked with SageMaker JumpStart to host industry-leading third-party LLMs so that data is not shared with infrastructure owned by third-party model providers.

This keeps customer data in Slack at all times and upholds the same security practices and compliance standards that customers expect from Slack itself. Slack is also using Amazon SageMaker inference capabilities for advanced routing strategies to scale the solution to customers with optimal performance, latency, and throughput.

"With Amazon SageMaker JumpStart, Slack can access state-of-the-art foundation models to power Slack AI, while prioritizing security and privacy. Slack customers can now search smarter, summarize conversations instantly, and be at their most productive."

Jackie Rocca, VP Product, AI at Slack

SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select foundation models (FMs) quickly based on predefined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can effortlessly deploy them into production with the user interface or SDK. In addition, you can access prebuilt solutions to solve common use cases and share ML artifacts, including ML models and notebooks, within your organization to accelerate ML model building and deployment. None of your data is used to train the underlying models. All the data is encrypted and is never shared with third-party vendors so you can trust that your data remains private and confidential.
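The SDK path mentioned above can be sketched with the SageMaker Python SDK. The model ID, instance type, and function name below are illustrative assumptions rather than details from this post, and a real call requires valid AWS credentials, permissions, and instance quota.

```python
def deploy_jumpstart_llm(model_id: str, instance_type: str = "ml.g5.12xlarge"):
    """Deploy a JumpStart foundation model to a real-time SageMaker endpoint.

    model_id and instance_type are illustrative placeholders, not the models
    or instances any particular customer uses.
    """
    # Import deferred: the sagemaker SDK and AWS credentials are only needed
    # when this function is actually called.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=model_id)
    # accept_eula is required for models distributed under an end-user license
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type=instance_type,
        accept_eula=True,
    )
    return predictor

# Example usage (not run here):
# predictor = deploy_jumpstart_llm("huggingface-llm-mistral-7b-instruct")
# predictor.predict({"inputs": "Summarize this conversation: ..."})
```

Once deployed, the endpoint is invoked over TLS within the account's own infrastructure, consistent with the data-privacy posture described above.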

Check out the SageMaker JumpStart model page for available models.

Slack launched Slack AI to provide native generative AI capabilities so that customers can easily find and consume large volumes of information quickly, enabling them to get even more value out of their shared knowledge in Slack. For example, users can ask a question in plain language and instantly get clear and concise answers with enhanced search. They can catch up on channels and threads in one click with conversation summaries. And they can access personalized, daily digests of what's happening in select channels with the newly launched recaps.

Because trust is Slack's most important value, Slack AI runs on enterprise-grade infrastructure that Slack built on AWS, upholding the same security practices and compliance standards that customers expect. Slack AI is built for security-conscious customers and is secure by design: customer data remains in-house, data is not used for LLM training purposes, and data remains siloed.

SageMaker JumpStart provides access to many LLMs, and Slack selects the FMs that best fit its use cases. Because these models are hosted on Slack's own AWS infrastructure, data sent to models during invocation doesn't leave Slack's AWS infrastructure. In addition, to provide a secure solution, data sent for invoking SageMaker models is encrypted in transit. Data sent to SageMaker JumpStart endpoints for invoking models is not used to train the base models. SageMaker JumpStart allows Slack to uphold high standards for security and data privacy, while also using state-of-the-art models that help Slack AI perform optimally for Slack customers.

SageMaker JumpStart endpoints serving Slack business applications are powered by AWS instances. SageMaker supports a wide range of instance types for model deployment, which allows Slack to pick the instance best suited to the latency and scalability requirements of Slack AI use cases. Slack AI has access to multi-GPU instances to host its SageMaker JumpStart models. Multiple GPUs allow each instance backing Slack AI's endpoint to host multiple copies of a model, which improves resource utilization and reduces model deployment cost. For more information, refer to Amazon SageMaker adds new inference capabilities to help reduce foundation model deployment costs and latency.
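The cost effect of packing model copies onto a multi-GPU instance can be sketched with simple arithmetic. The GPU counts and hourly price below are hypothetical placeholders, not Slack's actual configuration.

```python
def copies_per_instance(gpus_per_instance: int, gpus_per_copy: int) -> int:
    """How many model copies fit on one multi-GPU instance."""
    if gpus_per_copy <= 0 or gpus_per_instance < gpus_per_copy:
        raise ValueError("a model copy must fit on the instance")
    return gpus_per_instance // gpus_per_copy

def hourly_cost_per_copy(instance_hourly_usd: float, copies: int) -> float:
    """Effective hourly cost per model copy when copies share one instance."""
    return instance_hourly_usd / copies

# Hypothetical example: an 8-GPU instance hosting model copies that each need 2 GPUs
copies = copies_per_instance(gpus_per_instance=8, gpus_per_copy=2)          # 4 copies
per_copy = hourly_cost_per_copy(instance_hourly_usd=20.0, copies=copies)    # 5.0 USD/hour
```

Under these assumed numbers, co-hosting four copies cuts the per-copy serving cost to a quarter of running one copy per instance, which is the utilization benefit the paragraph describes.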

The following diagram illustrates the solution architecture.

To use the instances most effectively and support its concurrency and latency requirements, Slack used SageMaker routing strategies with its SageMaker endpoints. By default, a SageMaker endpoint distributes incoming requests uniformly across ML instances using the RANDOM routing strategy. However, with generative AI workloads, requests and responses can be extremely variable, and it's desirable to load balance by considering the capacity and utilization of each instance rather than routing at random. To effectively distribute requests across the instances backing its endpoints, Slack uses the LEAST_OUTSTANDING_REQUESTS (LAR) routing strategy. This strategy routes requests to the instances with the most capacity to process them instead of randomly picking any available instance, providing more uniform load balancing and resource utilization. As a result, Slack AI saw an over 39% decrease in p95 latency after enabling LEAST_OUTSTANDING_REQUESTS compared to RANDOM.
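The difference between the two strategies can be sketched with a toy router. This illustrates the least-outstanding-requests idea only; it is not SageMaker's internal implementation.

```python
import random

def route_random(outstanding: list[int], rng: random.Random) -> int:
    """RANDOM-style routing: pick any instance, ignoring current load."""
    return rng.randrange(len(outstanding))

def route_least_outstanding(outstanding: list[int]) -> int:
    """LEAST_OUTSTANDING_REQUESTS-style routing: pick the least-loaded instance."""
    return min(range(len(outstanding)), key=lambda i: outstanding[i])

# Three instances with 3, 1, and 2 requests currently in flight:
outstanding = [3, 1, 2]
chosen = route_least_outstanding(outstanding)  # index 1, the least-loaded instance
outstanding[chosen] += 1                       # the new request is now in flight there
```

With highly variable generative AI response times, always sending the next request to the least-loaded instance avoids the long queues that random placement can create behind an instance stuck on a slow generation.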

For more details on SageMaker routing strategies, see Minimize real-time inference latency by using Amazon SageMaker routing strategies.

Slack is delivering native generative AI capabilities that will help their customers be more productive and easily tap into the collective knowledge that's embedded in their Slack conversations. With fast access to a large selection of FMs and advanced load balancing capabilities hosted on dedicated instances through SageMaker JumpStart, Slack AI is able to provide rich generative AI features more robustly and quickly, while upholding Slack's trust and security standards.

Learn more about SageMaker JumpStart, Slack AI, and how the Slack team built Slack AI to be secure and private. Leave your thoughts and questions in the comments section.

Jackie Rocca is VP of Product at Slack, where she oversees the vision and execution of Slack AI, which brings generative AI natively and securely into Slack's user experience. Now she's on a mission to help customers accelerate their productivity and get even more value out of their conversations, data, and collective knowledge with generative AI. Prior to her time at Slack, Jackie was a Product Manager at Google for more than six years, where she helped launch and grow YouTube TV. Jackie is based in the San Francisco Bay Area.

Rachna Chadha is a Principal Solutions Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that the ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Maninder (Mani) Kaur is the AI/ML Specialist lead for Strategic ISVs at AWS. With her customer-first approach, Mani helps strategic customers shape their AI/ML strategy, fuel innovation, and accelerate their AI/ML journey. Mani is a firm believer in ethical and responsible AI, and strives to ensure that her customers' AI solutions align with these principles.

Gene Ting is a Principal Solutions Architect at AWS. He is focused on helping enterprise customers build and operate workloads securely on AWS. In his free time, Gene enjoys teaching kids technology and sports, as well as following the latest on cybersecurity.

Alan Tan is a Senior Product Manager with SageMaker, leading efforts on large model inference. He's passionate about applying machine learning to the area of analytics. Outside of work, he enjoys the outdoors.


DeepMind CEO says Google to spend more than $100B on AGI despite hype – Cointelegraph

Google's not backing down from the challenge posed by Microsoft when it comes to the artificial intelligence sector. At least not according to the CEO of Google DeepMind, Demis Hassabis.

Speaking at a TED conference in Canada, Hassabis recently went on the record saying that he expected Google to spend more than $100 billion on the development of artificial general intelligence (AGI) over time. His comments reportedly came in response to a question concerning Microsoft's recent Stargate announcement.

Microsoft and OpenAI are reportedly in discussions to build a $100 billion supercomputer project for the purpose of training AI systems. According to The Intercept, a person wishing to remain anonymous, who has had direct conversations with OpenAI CEO Sam Altman and seen the initial cost estimates on the project, says it's currently being discussed under the codename Stargate.

To put the proposed costs into perspective, the world's most powerful supercomputer, the U.S.-based Frontier system, cost approximately $600 million to build.
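Taking both quoted figures at face value, that perspective works out to a roughly 167-fold cost difference:

```python
# Back-of-the-envelope comparison of the two quoted figures
stargate_usd = 100e9   # reported Stargate project estimate
frontier_usd = 600e6   # approximate cost of the Frontier system

ratio = stargate_usd / frontier_usd
print(round(ratio))    # prints 167
```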

According to the report, Stargate wouldn't be a single system like Frontier. It would instead be a series of computers spread across the U.S., built in five phases, with the fifth and final phase being the Stargate system itself.

Hassabis' comments don't hint at exactly how Google might respond, but they seemingly confirm that the company is aware of Microsoft's endeavors and plans to invest just as much, if not more.

Ultimately, the stakes are simple. Both companies are vying to become the first organization to develop artificial general intelligence (AGI). Today's AI systems are constrained by their training methods and data and, as such, fall well short of human-level intelligence across myriad benchmarks.

AGI is a nebulous term for an AI system theoretically capable of doing anything an average adult human could do, given the right resources. An AGI system with access to a line of credit or a cryptocurrency wallet and the internet, for example, should be able to start and run its own business.

Related: DeepMind co-founder says AI will be able to invent, market, run businesses by 2029

The main challenge to being the first company to develop AGI is that there's no scientific consensus on exactly what AGI is or how one could be created.

Even among the world's most famous AI scientists, such as Meta's Yann LeCun and Google's Demis Hassabis, there exists no small amount of disagreement as to whether AGI can be achieved using the current brute-force method of increasing datasets and training parameters, or whether it can be achieved at all.

In a Financial Times article published in March, Hassabis compared the current AI/AGI hype cycle, and the scams it has attracted, to the cryptocurrency market. Despite the hype, both AI and crypto have seen explosive growth in their respective financial spaces in the first four months of 2024.

Where Bitcoin, the world's most popular cryptocurrency, sat at about $30,395 per coin in April 2023, it's now over $60,000 as of this article's publication, having only recently retreated from an all-time high of about $73,000.

Meanwhile, the current AI industry leader, Microsoft, has seen its stock go from $286 a share to around $416 in the same time period.


Congressional panel outlines five guardrails for AI use in House – FedScoop

A House panel has outlined five guardrails for deployment of artificial intelligence tools in the chamber, providing more detailed guidance as lawmakers and staff explore the technology.

The Committee on House Administration released the guardrails in a flash report on Wednesday, along with an update on the committees work exploring AI in the legislative branch. The guardrails are human oversight and decision-making; clear and comprehensive policies; robust testing and evaluation; transparency and disclosure; and education and upskilling.

"These are intended to be general, so that many House Offices can independently apply them to a wide variety of different internal policies, practices, and procedures," the report said. "House Committees and Member Offices can use these to inform their internal AI practices. These are intended to be applied to any AI tool or technology in use in the House."

The report comes as the committee and its Subcommittee on Modernization have focused on AI strategy and implementation in the House, and is the fifth such document it has put out since September 2023.

According to the report, the guardrails are a product of a roundtable the committee held in March that included participants such as the National Institute of Standards and Technology's Elham Tabassi, the Defense Department's John Turner, the Federation of American Scientists' Jennifer Pahlka, the House chief administrative officer, the clerk of the House, and senior staff from lawmakers' offices.

The roundtable "represented the first known instance of elected officials directly discussing AI's use in parliamentary operations," the report said. The report added that templates for the discussion were also shared with the think tank Bússola Tech, which works on modernization of parliaments and legislatures.

Already, members of Congress are experimenting with AI tools for things like research assistance and drafting, though use doesn't appear widespread. Meanwhile, both chambers have introduced policies to rein in use. In the House, the CAO has approved only ChatGPT Plus, while the Senate has allowed use of ChatGPT, Microsoft Bing Chat, and Google Bard with specific guardrails.

Interestingly, AI was used in the drafting of the committee's report, modeling the transparency guardrail the committee outlined. A footnote in the document discloses that "early drafts of this document were written by humans. An AI tool was used in the middle of the drafting process to research editorial clarity and succinctness. Subsequent reviews and approvals were human."


The Potential and Perils of Advanced Artificial General Intelligence – elblog.pl

Artificial General Intelligence (AGI) presents a new frontier in the evolution of machine capabilities. In essence, AGI stands as a level of artificial intelligence where machines are equipped to tackle any intellectual task that a human being can perform. Unlike narrow AI that excels in specific tasks such as image recognition or weather forecasting, AGI stretches its capacity to learning, self-improvement, and adaptability across various situations, emulating human-like intellect.

The development and application of AGI is a double-edged sword. The technology holds promise for immense societal benefits, such as resolving intricate problems, enhancing the quality of life, and offering support across sectors including healthcare, scientific research, and resource management.

On the flip side, the rise of AGI comes with significant risks and challenges. There's a tangible fear that uncontrolled AGI could become overpowering and autonomous, making decisions that might lead to dire consequences for humanity. AGI's efficiency in performing tasks could also result in job displacement across numerous professions. Furthermore, although AGI could lead to the creation of powerful information systems, it may simultaneously raise concerns regarding data security and privacy.

It's clear that while AGI harbors the potential for tremendous advantages, it is essential for society to carefully weigh and prepare for the potential risks and challenges that may arise from its advancement and utilization.

The Ethical and Moral Implications of AGI are substantial. As we imbue machines with human-like intelligence, questions arise about the rights of these intelligent systems, and how they fit into our moral and legal frameworks. There is an ongoing debate concerning whether AGIs should be granted personhood or legal protections, similar to those afforded to humans and animals.

Control and Alignment Issues with AGI pose critical challenges. Ensuring that AGI systems act in ways that are aligned with human values and do not diverge from intended goals is a complex problem known as the alignment problem. Researchers are working on developing safety measures to ensure that AGIs remain under human control and are beneficial rather than detrimental.

Advantages of AGI:

- Problem solving: AGI can potentially solve complex issues that are beyond human capability, including those relating to climate change, medicine, and logistics.
- Acceleration of innovation: AGI may dramatically speed up the pace of scientific and technological discovery, leading to rapid advancements in various fields.
- Efficiency and cost savings: By automating tasks, AGI can increase efficiency and reduce costs, making goods and services more affordable and accessible.

Disadvantages of AGI:

- Job displacement: AGI could automate jobs across many sectors, leading to mass unemployment and economic disruption.
- Safety and security: The difficulty in predicting the behavior of AGI systems makes them a potential risk to global security, and AGI could be utilized for malicious purposes if not properly regulated.
- Loss of human skills: Over-reliance on AGI could lead to the degradation of human skills and knowledge.

Most important questions regarding AGI:

1. How can we ensure that AGI will align with human values? Developing robust ethical frameworks and control mechanisms is crucial.
2. What are the implications of AGI for employment and the workforce? Proactive strategies are necessary to address job displacement, including retraining and education.
3. How can we protect against the misuse of AGI? International cooperation and regulation are key to preventing the weaponization or malicious use of AGI.

Key controversies:

- Regulation: There is debate over what forms of regulation are appropriate for AGI to encourage innovation while ensuring safety.
- Accessibility: Concerns exist about who should have access to AGI technology and whether it could exacerbate inequality.
- Economic impact: The potential transformation of the job market and economy by AGI is contested, with differing views on how to approach the transition.

For more information on AI and related topics, you can visit the sites of DeepMind, OpenAI, and the Future of Life Institute.

These links direct you to organizations actively involved in the development and research of advanced AI technologies and their implications.


DeepMind Head: Google AI Spending Could Exceed $100 Billion – PYMNTS.com

Google's top AI executive says the company's spending on the technology will surpass $100 billion.

While speaking Monday (April 15) at a TED conference in Vancouver, DeepMind CEO Demis Hassabis was asked about recent reports of Microsoft and OpenAI's planned artificial intelligence (AI) supercomputer known as Stargate, said to cost $100 billion.

"We don't talk about our specific numbers, but I think we're investing more than that over time," said Hassabis, whose comments were reported by Bloomberg News.

Hassabis, who co-founded DeepMind in 2010 before it was bought by Google four years later, did not offer further details on the potential AI investment, the report said. He also told the audience Google's computing power surpasses that of competitors like Microsoft.

"That's one of the reasons we teamed up with Google back in 2014, is we knew that in order to get to AGI we would need a lot of compute," he said, referring to artificial general intelligence, or AI that surpasses the intelligence of humans.

"That's what's transpired," he said. "And Google had and still has the most computers."

Hassabis added that the massive interest kicked off by OpenAI's ChatGPT AI model demonstrated the public was ready for the technology, even if AI systems are still prone to errors.

As PYMNTS wrote earlier this month, the Stargate project spotlights the increasing role of AI in fueling innovation and determining the future of commerce. Experts believe that as tech giants invest heavily in AI research and infrastructure, the creation of sophisticated AI systems could revolutionize areas like personalized marketing and supply chain optimization.

"It is important to consider the potential impact on jobs and the workforce," Jiahao Sun, founder and CEO at FLock.io, a platform for decentralized AI models, said in an interview with PYMNTS.

"As AI becomes more capable in multimodal tasks and integrated into commerce, it may automate industries that currently cannot easily be transferred into a chatbot interface, such as manufacturing, healthcare, sports coaching, etc."

Microsoft and OpenAI's $100 billion project could make AI chips more scarce, leading to more price spikes and leaving more businesses and governments behind due to limited access to hardware, Moshe Tanach, CEO and co-founder of AI company NeuReality, told PYMNTS, while adding that projects like Stargate will drive commerce forward in the short term.

"The installed hardware will fuel more AI projects, features and use cases, leading Microsoft to offer it at consumable prices, driving innovation on the consumer side with secondary use cases built on this accessible AI technology," Tanach said.
