Young professionals are turning to AI to create headshots. But there … – NPR

The photo on the left was what Sophia Jones fed the AI service. It generated the two images on the right. (Sophia Jones)

Sophia Jones is juggling a lot right now. She just graduated from her master's program, started her first full-time job with SpaceX and recently got engaged. But thanks to technology, one thing isn't on her to-do list: getting professional headshots taken.

Jones is one of a growing number of young professionals who are relying not on photographers to take headshots, but on generative artificial intelligence.

The process is simple enough: Users send in up to a dozen images of themselves to a website or app. Then they pick from sample photos with a style or aesthetic they want to copy, and the computer does the rest. More than a dozen of these services are available online and in app stores.

For Jones, the use of AI-generated headshots is a matter of convenience, because she can tweak images she already has and use them in a professional setting. She found out about AI-generated headshots on TikTok, where they recently went viral, and has since used them everywhere from her LinkedIn profile to graduation pamphlets to her workplace.

So far no one has noticed.

"I think you would have to do some serious investigating and zooming in to realize that it might not truly be me," Jones told NPR.

Still, many of these headshot services are far from perfect. Some of the generated photos give users extra hands or arms, and the services consistently struggle to render teeth and ears accurately.

These issues are likely a result of the data sets that the apps and services are trained on, according to Jordan Harrod, a Ph.D. candidate who is popular on YouTube for explaining how AI technology works.

Harrod said some AI technology being used now is different in that it learns what styles a user is looking for and applies them "almost like a filter" to the images. To learn these styles, the technology combs through massive data sets for patterns, which means the results are based on the things it's learning from.

"Most of it just comes from how much training data represents things like hands and ears and hair in various different configurations that you'd see in real life," Harrod said. And when the data sets underrepresent some configurations, some users are left behind or bias creeps in.

Rona Wang is a postgraduate student in a joint MIT-Harvard computer science program. When she used an AI service, she noticed that some of the features it added made her look completely different.

"It made my skin kind of paler and took out the yellow undertones," Wang said, adding that it also gave her big blue eyes when her eyes are brown.

Others who have tried AI headshots have pointed out similar errors, noticing that some websites make women look curvier than they are and that they can wash out complexions and have trouble accurately depicting Black hairstyles.

"When it comes to AI and AI bias, it's important for us to be thinking about who's included and who's not included," Wang said.

For many, the decision may come down to cost and accessibility.

Grace White, a law student at the University of Arkansas, was an early adopter of AI headshots, posting about her experience on TikTok and attracting more than 50 million views.

The close-up photo on the right was one of 10 real images that Grace White submitted to an AI service, which generated the two images on the left. (Grace White)

Ultimately, White didn't use the generated images and opted for a professional photographer to take her photo, but she said she recognizes that not everyone has the same budget flexibility.

"I do understand people who may have a lower income, and they don't have the budget for a photographer," White said. "I do understand them maybe looking for the AI route just to have a cheaper option for professional headshots."


Generative AI and data analytics on the agenda for Pamplin’s Day … – Virginia Tech

On Friday, Sept. 8, the second annual Day for Data symposium will gather industry leaders and academia together for a practical exploration of business analytics. The event is scheduled from 8 a.m. to 4 p.m. EDT in Virginia Tech's Owens Ballroom.

"Virginia Tech is a leader in advanced analytics programs and capabilities," said Jay Winkeler, executive director of the Center for Business Analytics. "Building off the success from last year, Day for Data will be bigger and bolder, with a focus on the AI [artificial intelligence] revolution happening all around us."

The conference, hosted by the Pamplin College of Business's Center for Business Analytics, is an opportunity for shared learning and thought leadership in the field of business analytics. Corporate leaders and university faculty converge to fill a robust agenda with expertise in a wide range of topics including generative AI and large language models, advanced data analytics, digital privacy, business leadership and intelligence, and more.

Beyond the rich learning component, Day for Data also lends itself to opportunities for professional advancement. With a strong turnout expected from both academia and industry, the event offers students a chance to see the real-world applications of their studies and companies an opportunity to scout for emerging talent.

"The interaction between students, faculty, and corporations is critical to harnessing the power of analytics and showing how skilled professionals translate analytics into meaningful business decisions," said Winkeler. "For industry professionals, it is a chance to tell their success stories and gain critical exposure to a talented student and faculty population."

The symposium will begin with opening remarks by Saonee Sarker, Richard E. Sorensen Dean for the Pamplin College of Business, followed by a keynote address from Andrew Allwine, senior director of data optimization for Norfolk Southern. During the session, Allwine will share his strategies for aggregating and translating complex datasets into actionable insights and tangible return on investment for organizational decision-makers.

Key contributions by faculty working within Pamplin include a session led by Voices of Privacy, an initiative spearheaded by Professors France Bélanger and Donna Wertalik that seeks to prepare society to manage their information privacy amid the challenging modern digital landscape, as well as a research poster session highlighting the latest research in the field.

After a lunch and networking break, Keith Johnson, director of solutions architecture for partner systems integrators at Amazon Web Services, will deliver a presentation and live demonstration of Amazon's latest innovations with generative AI and large language models. Tracy Jones, data strategy and management executive for Guidehouse, will follow with a session on the opportunities and threats of artificial intelligence implementation, including case studies of organizations that neglected ethical principles and suffered consequences.

Both experts will return to join Kevin Davis, chief growth officer for MarathonTS, and Cayce Myers, director of graduate studies for the School of Communication at Virginia Tech, for a panel discussion and interactive conversation on artificial intelligence, including ethical, legal, and technical considerations. Day for Data will conclude with a networking reception.

Day for Data 2023 is sponsored by Norfolk Southern, Guidehouse, MarathonTS, Ernst & Young, and Amazon Web Services.

For more information on Day for Data and to register, please visit the event page.


How to put words into a Bitcoin address? Here’s how vanity … – Cointelegraph

Have you ever wondered whether a Bitcoin (BTC) address, a string of 26–35 alphanumeric characters, can happen to have human-readable words instead of random letters?

You've probably heard of the Lightning Network, which allows you to create a fancy BTC address that looks like an email or a web domain. But there's also a way of creating Bitcoin addresses containing human-readable words on the original Bitcoin blockchain. Such addresses are known as vanity Bitcoin addresses.

A vanity Bitcoin address is a personalized BTC address that contains a specific pattern or word in a part of its total 26–35 character string of letters and numbers. Unlike a usual Bitcoin address, which is made of random characters, a vanity Bitcoin address allows users to customize their addresses or even send a specific message just within the address.

The term "vanity address" comes from the plain meaning of the word "vanity," which is used to express inflated pride in oneself or one's appearance. In line with the direct meaning, vanity addresses are used by those who want to stand out and give their wallet address a unique identity.

Vanity Bitcoin addresses became popular a few years after the anonymous Bitcoin creator Satoshi Nakamoto launched the cryptocurrency back in 2009. The first vanity address generator, called VanityGen, was released as an open-source platform on GitHub in 2012. One of the first references to vanity addresses on Bitcointalk.org, a major crypto forum created by Nakamoto, goes back to 2013.

According to Trezor's Bitcoin analyst Josef Tetek, Nakamoto didn't use vanity addresses: "He disappeared from the public before vanity addresses became popular," Tetek told Cointelegraph, referring to Nakamoto's vanishing in 2011.

Besides the Bitcoin blockchain, vanity addresses are also available on other networks, including the Ethereum blockchain. Unlike Bitcoin vanity addresses, which allow users to choose among 26–35 alphanumeric characters, Ethereum vanity addresses only feature hexadecimal characters, as Ether (ETH) addresses can only include the letters A through F and the numbers zero through nine.

According to the ETH Optimism vanity address generator, creating an Ethereum vanity address starting with 0xFad69 would take up to five minutes.
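Those time estimates follow from simple arithmetic: each extra character in the pattern multiplies the search space by the alphabet size, 16 for hex and 58 for Base58. A minimal back-of-the-envelope sketch (the attempt rate is an illustrative assumption, not a benchmark of any real generator):

```python
# Rough expected-attempts math for vanity-address search.
# Each candidate address is effectively a uniform random draw, so hitting
# a fixed pattern of length n over an alphabet of size k takes ~k**n tries.

def expected_attempts(pattern_len: int, alphabet_size: int) -> int:
    """Expected random attempts to hit a fixed pattern of given length."""
    return alphabet_size ** pattern_len

# Ethereum addresses are hexadecimal (16 symbols); Bitcoin uses Base58 (58).
eth_tries = expected_attempts(5, 16)   # e.g. the "Fad69" part of 0xFad69
btc_tries = expected_attempts(5, 58)   # a 5-character Base58 pattern

# Illustrative rate assumption: 10,000 attempts per second on a laptop.
RATE = 10_000
print(f"hex 5-char:    ~{eth_tries:,} tries, ~{eth_tries / RATE / 60:.1f} min")
print(f"base58 5-char: ~{btc_tries:,} tries, ~{btc_tries / RATE / 3600:.1f} h")
```

Under that assumed rate, a 5-character hex pattern falls in the minutes range, in line with the ETH Optimism estimate above, while the larger Base58 alphabet pushes the same length into hours.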

There are two ways of creating a vanity BTC address: manually and using specialized vanity address generator services. The first method relies on software and requires some computing power and coding skills to run programs to find Bitcoin addresses starting with a specific word combination.

Many Bitcoin experts like Trezor's Tetek agree that the first method is the most secure way of creating a vanity Bitcoin address, as this method allows users to keep their seed phrase private. Being the only owner of a private key or a seed phrase enables the user to be the sole holder of the funds associated with the address.

The manual method requires installing vanity address-generating software like VanityGen, which is available on the code-hosting platform GitHub. Running such software requires certain computing power specs, with larger sequences of symbols demanding more time to create a vanity address.

Various sources estimate that generating a vanity address containing a five-symbol word takes about one hour using a regular personal computer, while larger sequences like seven symbols could take up to three months. More sophisticated setups involving powerful graphic cards or even application-specific integrated circuit (ASIC) chips can significantly reduce the time needed to generate a vanity address.
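The search loop itself is conceptually simple: generate a candidate address, check the prefix, repeat. The sketch below simulates that loop with random Base58 strings rather than real key pairs (a real generator like VanityGen derives each address from a fresh private key; the `random` draws here are purely illustrative):

```python
import random
import string

# Base58 alphabet: alphanumerics minus the look-alike characters 0, O, I, l.
BASE58 = "".join(c for c in string.digits + string.ascii_letters
                 if c not in "0OIl")

def fake_address() -> str:
    """Stand-in for deriving a Bitcoin address from a random private key."""
    return "1" + "".join(random.choice(BASE58) for _ in range(33))

def search_vanity(prefix: str, max_tries: int = 1_000_000) -> tuple[str, int]:
    """Keep generating until an address starts with the wanted prefix."""
    for tries in range(1, max_tries + 1):
        addr = fake_address()
        if addr.startswith(prefix):
            return addr, tries
    raise RuntimeError("no match within max_tries")

random.seed(42)
addr, tries = search_vanity("1A")  # one chosen character after the leading "1"
print(f"found {addr} after {tries:,} tries")
```

Each added character multiplies the expected number of loop iterations by 58, which is why the real tools lean on GPUs or ASICs for longer patterns.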

The second method of creating a vanity address is more straightforward but less secure as it relies on delegating the address search to third-party services, also known as vanity address miners.

Reliance on Bitcoin vanity services is associated with major risks, as miners can potentially take over the address and its assets at any time. That is because such miners are the first to receive the private key before passing it to the customer. The private key is generated at the moment of creating a Bitcoin address and cannot be changed afterward.

The vanity generation service is often offered via websites like Vanitygen.net, allowing users to simply order a certain desired word or sequence to be searched with computing power bought online. Such services often allow users to order a sequence of letters up to eight symbols. Once generated, the private key for the vanity address is sent to the customer's email in exchange for the agreed price.

For example, generating a Bitcoin vanity address starting with 1Satoshi would cost around 0.0217 BTC, worth around $600 at the time of writing. Larger sequences like 1Nakamoto would require at least 0.11 BTC, or as much as $3,250.

It's important to note that not all letters and numbers can be included in a vanity Bitcoin address, just like a normal BTC address. Some characters, like the uppercase letter O, the uppercase letter I, the lowercase letter l, and the number 0, are excluded from the set of 26–35 alphanumeric characters available in all Bitcoin addresses. The exclusions aim to help users avoid confusion when sending funds on the Bitcoin blockchain.

A decision on whether or not to use a Bitcoin vanity address ultimately depends on the reasons for having such an address in the first place, taking into account all possible risks. Some cryptocurrency exchanges like BitMEX have experimented with vanity addresses using the native Segregated Witness (SegWit) address format Bech32 with the bc1qmex prefix.

A spokesperson for BitMEX told Cointelegraph that most vanity addresses are used for marketing or considered a bit of fun.

"Bitcoin vanity addresses were quite popular on BitcoinTalk circa 2011, when many solicited donations to their personal vanity address, for example, 1Name," the BitMEX representative noted, adding:

The firm also attempted to use vanity addresses to make it harder for attackers to scam users since BitMEX only gave vanity addresses to users. However, one should not rely on vanity addresses as a security mechanism, as more advanced attackers could manage to copy the vanity address format, the representative noted.

BitMEX's spokesperson says vanity addresses are best suited for advanced users: "The main weakness for individual users is reduced privacy. In general, we would advise users not to reuse addresses at all," adding that newer BitMEX customer addresses no longer feature a vanity prefix.

Trezor's Bitcoin expert Tetek strongly advised against using vanity addresses because such addresses, even if generated in a secure manner, promote address reuse, which is a bad practice in terms of privacy.

Besides privacy and asset safety risks, vanity BTC addresses are also associated with security vulnerabilities. In 2022, hackers managed to steal $3.3 million in crypto through a vulnerability in Ethereum vanity address-generating tool Profanity. Additionally, in March 2023, attackers also used hacked vanity addresses to steal $500,000 worth of tokens from layer-2 scaling solution Arbitrums airdrop.

Despite Bitcoin vanity addresses having become much less popular since 2011, there is no sign that they have fallen entirely out of use in recent years.

One report recently described the use of a Bitcoin vanity address containing swear words apparently directed toward Russia's President Vladimir Putin. The address transacted a total of 0.29 BTC ($7,595) in 67 transactions between 2018 and 2020, leaving its balance at zero.

One of its last recorded transactions was a 0.0004 BTC ($10) transfer to the public Bitcoin address of famous Bitcoin critic Warren Buffett, who was given a BTC address as a gift from Tron founder Justin Sun.

Moreover, challenges and considerations persist. For instance, the security risks linked to vanity address generators must be addressed, prompting the development of more secure and user-friendly tools. Vanity address creation could become more streamlined and available to a wider audience, not just those with coding expertise, as blockchain systems develop and incorporate new features.

However, the privacy issues raised by the reuse of addresses will remain a crucial consideration. Therefore, users who want personalized addresses must balance the advantages of uniqueness against possible privacy breaches.

While it's important to understand that Bitcoin vanity addresses are quite risky and expensive, such addresses apparently unlock some new and maybe weird use cases of the cryptocurrency. With that in mind, it's up to Bitcoin users whether the future of Bitcoin vanity addresses is bright or not.



AI helps robots manipulate objects with their whole bodies – MIT News

Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier's fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.

Now, MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.

While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.

"Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans," says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique.

Joining Suh on the paper are co-lead author Tao Pang PhD '23, a roboticist at Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research appears this week in IEEE Transactions on Robotics.

Learning about learning

Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.

It has been used effectively for contact-rich manipulation planning, where the robot seeks to learn the best way to move an object in a specified manner.

But because there may be billions of potential contact points that a robot must reason about when determining how to use its fingers, hands, arms, and body to interact with an object, this trial-and-error approach requires a great deal of computation.

"Reinforcement learning may need to go through millions of years in simulation time to actually be able to learn a policy," Suh adds.

On the other hand, if researchers specifically design a physics-based model using their knowledge of the system and the task they want the robot to accomplish, that model incorporates structure about this world that makes it more efficient.

Yet physics-based approaches aren't as effective as reinforcement learning when it comes to contact-rich manipulation planning, and Suh and Pang wondered why.

They conducted a detailed analysis and found that a technique known as smoothing is what enables reinforcement learning to perform so well.

Many of the decisions a robot could make when determining how to manipulate an object aren't important in the grand scheme of things. For instance, each infinitesimal adjustment of one finger, whether or not it results in contact with the object, doesn't matter very much. Smoothing averages away many of those unimportant, intermediate decisions, leaving a few important ones.

Reinforcement learning performs smoothing implicitly by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar type of smoothing, enabling it to focus on core robot-object interactions and predict long-term behavior. They showed that this approach could be just as effective as reinforcement learning at generating complex plans.
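The idea can be illustrated on a toy problem: a hard "contact made / not made" step function has no useful gradient, but averaging it over random perturbations produces a smooth surrogate that rises gradually toward the goal. The step function and noise scale below are invented for illustration, not taken from the paper:

```python
import random

def contact_reward(x: float) -> float:
    """Toy discontinuous reward: 1 once 'contact' is made (x >= 0), else 0."""
    return 1.0 if x >= 0 else 0.0

def smoothed_reward(x: float, sigma: float = 0.5, samples: int = 20_000) -> float:
    """Randomized smoothing: average the reward under Gaussian perturbations.
    This is the weighted-average effect that RL achieves implicitly."""
    return sum(contact_reward(x + random.gauss(0, sigma))
               for _ in range(samples)) / samples

random.seed(0)
# The raw step is flat almost everywhere, but the smoothed version increases
# steadily, so a simple gradient step can make progress toward contact.
left, mid, right = smoothed_reward(-1.0), smoothed_reward(0.0), smoothed_reward(1.0)
print(f"{left:.3f} < {mid:.3f} < {right:.3f}")
```

The smoothed values climb from near 0 to near 1 as `x` approaches and passes the contact threshold, giving a planner a usable slope where the raw reward offered none.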

"If you know a bit more about your problem, you can design more efficient algorithms," Pang says.

A winning combination

Even though smoothing greatly simplifies the decisions, searching through the remaining decisions can still be a difficult problem. So, the researchers combined their model with an algorithm that can rapidly and efficiently search through all possible decisions the robot could make.

With this combination, the computation time was cut down to about a minute on a standard laptop.

They first tested their approach in simulations where robotic hands were given tasks like moving a pen to a desired configuration, opening a door, or picking up a plate. In each instance, their model-based approach achieved the same performance as reinforcement learning, but in a fraction of the time. They saw similar results when they tested their model in hardware on real robotic arms.

"The same ideas that enable whole-body manipulation also work for planning with dexterous, human-like hands. Previously, most researchers said that reinforcement learning was the only approach that scaled to dexterous hands, but Terry and Tao showed that by taking this key idea of (randomized) smoothing from reinforcement learning, they can make more traditional planning methods work extremely well, too," Tedrake says.

However, the model they developed relies on a simpler approximation of the real world, so it cannot handle very dynamic motions, such as objects falling. While effective for slower manipulation tasks, their approach cannot create a plan that would enable a robot to toss a can into a trash bin, for instance. In the future, the researchers plan to enhance their technique so it could tackle these highly dynamic motions.

"If you study your models carefully and really understand the problem you are trying to solve, there are definitely some gains you can achieve. There are benefits to doing things that are beyond the black box," Suh says.

This work is funded, in part, by Amazon, MIT Lincoln Laboratory, the National Science Foundation, and the Ocado Group.


How to minimize data risk for generative AI and LLMs in the enterprise – VentureBeat


Enterprises have quickly recognized the power of generative AI to uncover new ideas and increase both developer and non-developer productivity. But pushing sensitive and proprietary data into publicly hosted large language models (LLMs) creates significant risks in security, privacy and governance. Businesses need to address these risks before they can start to see any benefit from these powerful new technologies.

As IDC notes, enterprises have legitimate concerns that LLMs may learn from their prompts and disclose proprietary information to other businesses that enter similar prompts. Businesses also worry that any sensitive data they share could be stored online and exposed to hackers or accidentally made public.

That makes feeding data and prompts into publicly hosted LLMs a nonstarter for most enterprises, especially those operating in regulated spaces. So, how can companies extract value from LLMs while sufficiently mitigating the risks?

Instead of sending your data out to an LLM, bring the LLM to your data. This is the model most enterprises will use to balance the need for innovation with the importance of keeping customer PII and other sensitive data secure. Most large businesses already maintain a strong security and governance boundary around their data, and they should host and deploy LLMs within that protected environment. This allows data teams to further develop and customize the LLM and employees to interact with it, all within the organization's existing security perimeter.
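Even with a self-hosted model, many teams add a defense-in-depth step: masking obvious identifiers before a prompt is logged or leaves the governed environment. The sketch below is an illustrative complement to the perimeter approach described above, not a recommendation from the article; the regex patterns are simplistic assumptions, nowhere near a complete PII detector:

```python
import re

# Illustrative prompt scrubber: mask obvious identifiers before a prompt
# is stored or forwarded. Patterns here are deliberately minimal examples.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

cleaned = scrub("Contact jane.doe@example.com or 555-867-5309 re: SSN 123-45-6789")
print(cleaned)  # Contact [EMAIL] or [PHONE] re: SSN [SSN]
```

Production systems would use a dedicated PII-detection service rather than hand-rolled regexes, but the shape of the safeguard is the same: sanitize at the boundary, keep raw data inside the perimeter.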


A strong AI strategy requires a strong data strategy to begin with. That means eliminating silos and establishing simple, consistent policies that allow teams to access the data they need within a strong security and governance posture. The end goal is to have actionable, trustworthy data that can be accessed easily to use with an LLM within a secure and governed environment.

LLMs trained on the entire web present more than just privacy challenges. They're prone to hallucinations and other inaccuracies and can reproduce biases and generate offensive responses that create further risk for businesses. Moreover, foundational LLMs have not been exposed to your organization's internal systems and data, meaning they can't answer questions specific to your business, your customers and possibly even your industry.

The answer is to extend and customize a model to make it smart about your own business. While hosted models like ChatGPT have gotten most of the attention, there is a long and growing list of LLMs that enterprises can download, customize, and use behind the firewall, including open-source models like StarCoder from Hugging Face and StableLM from Stability AI. Training a foundational model on the entire web requires vast amounts of data and computing power, but as IDC notes, once a generative model is trained, it can be fine-tuned for a particular content domain with much less data.

An LLM doesn't need to be vast to be useful. "Garbage in, garbage out" is true for any AI model, and enterprises should customize models using internal data that they know they can trust and that will provide the insights they need. Your employees probably don't need to ask your LLM how to make a quiche or for Father's Day gift ideas. But they may want to ask about sales in the Northwest region or the benefits a particular customer's contract includes. Those answers will come from tuning the LLM on your own data in a secure and governed environment.

In addition to higher-quality results, optimizing LLMs for your organization can help reduce resource needs. Smaller models targeting specific use cases in the enterprise tend to require less compute power and smaller memory sizes than models built for general-purpose use cases or a large variety of enterprise use cases across different verticals and industries. Making LLMs more targeted for use cases in your organization will help you run LLMs in a more cost-effective, efficient way.

Tuning a model on your internal systems and data requires access to all the information that may be useful for that purpose, and much of this will be stored in formats besides text. About 80% of the world's data is unstructured, including company data such as emails, images, contracts and training videos.

That requires technologies like natural language processing to extract information from unstructured sources and make it available to your data scientists so they can build and train multimodal AI models that can spot relationships between different types of data and surface these insights for your business.

This is a fast-moving area, and businesses must use caution with whatever approach they take to generative AI. That means reading the fine print about the models and services they use and working with reputable vendors that offer explicit guarantees about the models they provide. But it's an area where companies cannot afford to stand still, and every business should be exploring how AI can disrupt its industry. There's a balance that must be struck between risk and reward, and by bringing generative AI models close to your data and working within your existing security perimeter, you're more likely to reap the opportunities that this new technology brings.

Torsten Grabs is senior director of product management at Snowflake.

