Archive for the ‘Machine Learning’ Category

Google and OpenAI are Walmarts besieged by fruit stands – TechCrunch


OpenAI may be synonymous with machine learning now and Google is doing its best to pick itself up off the floor, but both may soon face a new threat: rapidly multiplying open source projects that push the state of the art and leave the deep-pocketed but unwieldy corporations in their dust. This Zerg-like threat may not be an existential one, but it will certainly keep the dominant players on the defensive.

The notion is not new by a long shot; in the fast-moving AI community, this kind of disruption is expected on a weekly basis. But the situation was put in perspective by a widely shared document purported to originate within Google. "We have no moat, and neither does OpenAI," the memo reads.

I won't encumber the reader with a lengthy summary of this perfectly readable and interesting piece, but the gist is that while GPT-4 and other proprietary models have obtained the lion's share of attention and indeed income, the head start they've gained with funding and infrastructure is looking slimmer by the day.

While the pace of OpenAI's releases may seem blistering by the standards of ordinary major software releases, GPT-3, ChatGPT and GPT-4 were certainly hot on each other's heels if you compare them to versions of iOS or Photoshop. But they are still occurring on the scale of months and years.

What the memo points out is that in March, a foundation language model from Meta, called LLaMA, was leaked in fairly rough form. Within weeks, people tinkering around on laptops and penny-a-minute servers had added core features like instruction tuning, multiple modalities and reinforcement learning from human feedback. OpenAI and Google were probably poking around the code, too, but they didn't (and couldn't) replicate the level of collaboration and experimentation occurring in subreddits and Discords.

Could it really be that the titanic computation problem that seemed to pose an insurmountable obstacle (a moat) to challengers is already a relic of a different era of AI development?

Sam Altman already noted that we should expect diminishing returns when throwing parameters at the problem. Bigger isn't always better, sure, but few would have guessed that smaller was instead.

The business paradigm being pursued by OpenAI and others right now is a direct descendant of the SaaS model. You have some software or service of high value and you offer carefully gated access to it through an API or some such. It's a straightforward and proven approach that makes perfect sense when you've invested hundreds of millions into developing a single monolithic yet versatile product like a large language model.
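To make the gated-access pattern concrete, here is a minimal sketch of what the customer side typically looks like; it assumes the openai Python package with an API key in the environment, and the model name and prompt are illustrative rather than drawn from the article. The model itself never leaves the vendor's servers; all you get is a metered endpoint.

```python
# Minimal sketch of the API-gated SaaS pattern: the model lives behind the
# vendor's endpoint and you pay per request. Assumes the openai package is
# installed and OPENAI_API_KEY is set; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize the indemnification clause in this contract: ..."}
    ],
)
print(response.choices[0].message.content)
```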

If GPT-4 generalizes well to answering questions about precedents in contract law, great; never mind that a huge share of its capacity is dedicated to being able to parrot the style of every author who ever published a work in the English language. GPT-4 is like a Walmart. No one actually wants to go there, so the company makes damn sure there's no other option.

But customers are starting to wonder, why am I walking through 50 aisles of junk to buy a few apples? Why am I hiring the services of the largest and most general-purpose AI model ever created if all I want to do is exert some intelligence in matching the language of this contract against a couple hundred other ones? At the risk of torturing the metaphor (to say nothing of the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

It didn't take long in the AI world for a large language model to be run, in highly truncated form of course, on (fittingly) a Raspberry Pi. For a business like OpenAI, its jockey Microsoft, Google or anyone else in the AI-as-a-service world, it effectively beggars the entire premise of their business: that these systems are so hard to build and run that they have to do it for you. In fact it starts to look like these companies picked and engineered a version of AI that fit their existing business model, not vice versa!
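To give a sense of what running a "highly truncated" model locally involves, here is a minimal sketch using the llama-cpp-python bindings; it assumes a quantized GGUF weights file is already on disk, and the file path and prompt are placeholders, not references to any specific release.

```python
# Minimal sketch: running a quantized LLaMA-family model locally with
# llama-cpp-python. The GGUF filename is a placeholder; in practice you would
# point this at whatever quantized weights you have legitimately obtained.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-7b-q4.gguf", n_ctx=512)

output = llm("Q: What is a moat in business strategy? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

The point is that the heavy lifting happens on commodity hardware you own, not behind a metered endpoint.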

Once upon a time you had to offload the computation involved in word processing to a mainframe; your terminal was just a display. Of course that was a different era, and we've long since been able to fit the whole application on a personal computer. That process has occurred many times since as our devices have repeatedly and exponentially increased their capacity for computation. These days when something has to be done on a supercomputer, everyone understands that it's just a matter of time and optimization.

For Google and OpenAI, the time came a lot quicker than expected. And they weren't the ones to do the optimizing, and may never be at this rate.

Now, that doesn't mean that they're plain out of luck. Google didn't get where it is by being the best (not for a long time, anyway). Being a Walmart has its benefits. Companies don't want to have to find the bespoke solution that performs the task they want 30% faster if they can get a decent price from their existing vendor and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast that they're running out of camelids to name them after. Incidentally, I'd like to thank the developers for an excuse to just scroll through hundreds of pictures of cute, tawny vicuñas instead of working. But few enterprise IT departments are going to cobble together an implementation of Stability's open source derivative-in-progress of a quasi-legal leaked Meta model over OpenAI's simple, effective API. They've got a business to run!

But at the same time, I stopped using Photoshop years ago for image editing and creation because the open source options like Gimp and Paint.net have gotten so incredibly good. At this point, the argument goes the other direction. Pay how much for Photoshop? No way, we've got a business to run!

What Google's anonymous authors are clearly worried about is that the distance from the first situation to the second is going to be much shorter than anyone thought, and there doesn't appear to be a damn thing anybody can do about it.

Except, the memo argues: embrace it. Open up, publish, collaborate, share, compromise. As they conclude:

Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.

Visit link:
Google and OpenAI are Walmarts besieged by fruit stands - TechCrunch

Meta Platforms scoops up AI networking chip team from Graphcore – The Economic Times

Meta Platforms Inc has hired an Oslo-based team that until late last year was building artificial intelligence networking technology at British chip unicorn Graphcore. A Meta spokesperson confirmed the hirings in response to a request for comment, after Reuters identified 10 people whose LinkedIn profiles said they worked at Graphcore until December 2022 or January 2023 and subsequently joined Meta in February or March of this year.

"We recently welcomed a number of highly-specialized engineers in Oslo to our infrastructure team at Meta. They bring deep expertise in the design and development of supercomputing systems to support AI and machine learning at scale in Meta's data centers," said Jon Carvill, the Meta spokesperson.

On top of that, Meta is now rushing to join competitors like Microsoft Corp and Alphabet Inc's Google in releasing generative AI products capable of creating human-like writing, art and other content, which investors see as the next big growth area for tech companies.

Carvill declined to say what they would be working on at Meta.

Meta already has an in-house unit designing several kinds of chips aimed at speeding up and maximizing efficiency for its AI work, including a network chip that performs a sort of air traffic control function for servers, two sources told Reuters.

A new category of network chip has emerged to help keep data moving smoothly within the large computing clusters used for AI work. Nvidia, AMD and Intel Corp all make such network chips.

Graphcore, one of the UK's most valuable tech startups, once was seen by investors like Microsoft and venture capital firm Sequoia as a promising potential challenger to Nvidia's commanding lead in the market for AI chip systems.

However, it faced a setback in 2020 when Microsoft scrapped an early deal to buy Graphcore's chips for its Azure cloud computing platform, according to a report by UK newspaper The Times. Microsoft instead used Nvidia's GPUs to build the massive infrastructure powering ChatGPT developer OpenAI, which Microsoft also backs.

Sequoia has since written down its investment in Graphcore to zero, although it remains on the company's board, according to a source familiar with the relationship. The write-down was first reported by Insider in October.

A Graphcore spokesperson confirmed the setbacks, but said the company was "perfectly positioned" to take advantage of accelerating commercial adoption of AI.

Graphcore was last valued at $2.8 billion after raising $222 million in its most recent investment round in 2020.

See the original post:
Meta Platforms scoops up AI networking chip team from Graphcore - The Economic Times

How to get going with machine learning – Robotics and Automation News

We can see everyone around us talking about machine learning and artificial intelligence. But is the hype around machine learning justified? Let's dive into the details of machine learning and how to start it from scratch.

Machine learning is a technique through which we teach computers and electronic devices to provide accurate answers. When data is fed into the system, it processes that data in a defined way to find precise answers to the questions it is asked.

For example, questions such as: What is the taste of avocado?, What are the things to consider when buying an old car?, How do I drive safely on the road?, and so on.

But using machine learning, the computer is trained to give precise answers even without direct input from developers. In other words, machine learning is a sophisticated approach in which computers are trained to provide correct answers to complicated questions.

Furthermore, they are trained to learn more, distinguish confusing questions, and provide satisfactory answers.

Machine learning and AI are the future. Therefore, people who learn these skills and become proficient will be first in line to reap the rewards. There are companies that offer machine learning services to augment your business.

In other words, to gain a real advantage, we should engage with these services to drive exponential growth in our business.

Initially, developers do a great deal of training and modeling, along with other work crucial to machine learning development. Additionally, vast amounts of data are used to provide precise results and effectively reduce decision-making time.

Here are the simple steps that can get you started with machine learning.

Make up your mind and choose the tool with which you want to master machine learning development.

Always look for a language that is practical and widely accepted across multiple platforms.

As we know, machine learning involves a rigorous process of modeling and training, so consistent practice is essential.

To get the most benefit, create a clear and well-organized portfolio to demonstrate your skills to the world.

When we apply an algorithm to a data set, the output we get is called a model. It is also known as a hypothesis.

In technical terms, a feature is a quantifiable property that describes a characteristic of whatever is being modeled. Features are what allow algorithms to recognize and classify examples, and they are used as input to a model.

For example, to recognize a fruit, a model uses features such as smell, taste, size, color, and so on. These features are vital for distinguishing one target from another based on several characteristics.

The value or variable that the machine learning model is built to predict is called the target, also known as the label.

For example, in the fruit data set described above, each example's label is a specific fruit such as orange, banana, apple or pineapple, as in the short sketch below.
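As a minimal, hypothetical illustration of features and labels (the fruit measurements below are invented for this sketch), the data might be arranged like this in Python:

```python
# Hypothetical fruit data set: each row of X is a feature vector
# (size in cm, weight in g, color encoded as a number), and y holds
# the target labels the model should learn to predict.
import numpy as np

X = np.array([
    [7.5, 150, 0],   # orange-like features
    [18.0, 120, 1],  # banana-like features
    [8.0, 180, 2],   # apple-like features
])
y = np.array(["orange", "banana", "apple"])  # targets / labels
```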

In machine learning, training is the process of adjusting the model's weights and biases using labeled examples. Under supervision, the learning process repeatedly updates the model so that its loss, the gap between its predictions and the correct outputs, is minimized.
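A minimal supervised-training sketch, assuming scikit-learn is installed and using the same invented fruit data; calling fit() is the step that adjusts the model's internal parameters to the labeled examples:

```python
# Train a simple classifier on labeled fruit examples.
# X holds feature vectors (size cm, weight g, color code); y holds labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[7.5, 150, 0], [18.0, 120, 1], [8.0, 180, 2]])
y = np.array(["orange", "banana", "apple"])

model = DecisionTreeClassifier()
model.fit(X, y)  # adjusts internal parameters to fit the labeled examples
```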

After training, we can feed the model new inputs and it will generate a predicted output, or label, for each one. However, it is essential to verify that the system performs accurately on unseen data; only then can we say the model is working well.
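A minimal sketch of checking performance on unseen data, again with invented fruit measurements and scikit-learn: hold out part of the labeled data, train on the rest, and score the model on the held-out portion.

```python
# Evaluate on unseen data: split the labeled examples, train on one part,
# and measure accuracy on the held-out part the model never saw in training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X = np.array([[7.5, 150, 0], [18.0, 120, 1], [8.0, 180, 2],
              [7.8, 160, 0], [19.0, 115, 1], [8.2, 175, 2]])
y = np.array(["orange", "banana", "apple", "orange", "banana", "apple"])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print("accuracy on unseen data:", accuracy_score(y_test, predictions))
```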

As machine learning continues to grow in importance for enterprise operations, and AI becomes more practical in corporate settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly targeted at developing more general applications. Today's AI models require extensive training to produce an algorithm that is highly optimized to perform one task.

But some researchers are exploring ways to make models more flexible, searching for techniques that allow a system to apply context learned from one task to future, different tasks.


Read the original:
How to get going with machine learning - Robotics and Automation News

Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI)-enabled medical device received FDA approval in 1995 for cervical slide interpretation, 521 FDA approvals have been granted for AI-powered devices as of May 2023.1 Many of these devices are for early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy, and a higher chance of maintaining a good quality of life after cancer treatment.

While an extensive review of these is beyond the scope of one article, this article will summarize the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large database analyses for identifying patients at risk for cancer or with early signs of cancer. These models analyze the electronic medical record, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer; those at risk of cancer based on known risk factors; or those with specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health outcomes such as new-onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched with pancreatic cancer," thus making screening higher yield and more cost-effective at an earlier stage.
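As a purely illustrative sketch of this kind of risk stratification (the features, values, and model choice below are hypothetical and not drawn from the cited studies), a simple classifier could score patients from routine health-record data:

```python
# Hypothetical sketch: score patients for elevated pancreatic-cancer risk from
# routine health-record features. All values and the model choice are
# illustrative only; this is not a validated clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: age, new_onset_hyperglycemia (0/1), weight_loss_kg_past_year
X = np.array([
    [68, 1, 6.0],
    [55, 0, 0.5],
    [72, 1, 4.0],
    [49, 0, 0.0],
    [63, 0, 1.0],
    [70, 1, 5.5],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = later diagnosed, 0 = not (made up)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[66, 1, 3.5]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"estimated risk score: {risk:.2f}")  # used to prioritize screening
```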

Another area leveraging AI and ML is image analysis. Human vision is best centrally, representing less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is better suited for rapid movements and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is one of the vulnerabilities of humans, as demonstrated in the study that showed even experts missed a gorilla in a CT scan when searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots or inattentional blindness. In a study that compared a deep learning algorithm to radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4
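As a generic, minimal sketch of what an image-classification pipeline of this sort looks like in code (using a stock pretrained torchvision network purely as a stand-in; this is not the algorithm from the cited study and is not trained for any medical task, and the image filename is a placeholder):

```python
# Generic image-classification sketch with a pretrained torchvision model,
# standing in for a purpose-built medical imaging network. Not a diagnostic tool.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = Image.open("chest_xray.png").convert("RGB")  # placeholder filename
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(logits.argmax(dim=1))  # index of the highest-scoring class
```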

AI algorithm analysis of histologic specimens can serve as an initial screening tool and as a real-time interactive assistant during histological analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastasis.7 AI is also being explored for predicting gene mutations from histologic analysis. This has the potential to decrease cost and shorten time to analysis, both of which are limitations in today's practice that restrict universal gene analysis in cancer patients,8 even as gene analysis gains a role in precision cancer treatment.9

An exciting and up-and-coming area for AI and deep learning is the combination of the above approaches, such as combining large-scale data analysis with pathology assessment and/or image analyses. For example, using medical record analysis and chest X-ray (CXR) findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests. Many of these are already available or in development, and they often use AI algorithms to develop, analyze and validate the test.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians. Instead, it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will match outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance, and training and studying human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection, or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely due to differences in how AI affects different endoscopists.

Artificial intelligence and machine learning are dramatically altering how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, AI's pace of innovation is coming upon us quickly and, in certain instances, faster than many can grasp and adapt to.

Here is the original post:
Artificial Intelligence and Machine Learning in Cancer Detection - Targeted Oncology

ASCRS 2023: Predicting vision outcomes in cataract surgery with … – Optometry Times

Mark Packer, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on machine learning and predicting vision outcomes after cataract surgery at the 2023 ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

Sheryl Stevenson:

We're joined by Dr. Mark Packer, who will be presenting at this year's ASCRS. Hello to Dr. Packer. Great to see you again.

Mark Packer, MD:

Good to see you, Sheryl.

Stevenson:

Sure, tell us a little bit about your talk on machine learning and predicting vision outcomes after cataract surgery.

Packer:

Sure, well, as we know, humans tend to be fallible, and even though surgeons don't like to admit it, they have been prone to make errors from time to time. And you know, one of the errors that we make is that we always extrapolate from our most recent experience. So if I just had a patient who was very unhappy with a multifocal IOL, all of a sudden, I'm going to be a lot more cautious with my next patient, and maybe the one after that, too.

And, the reverse can happen as well. If I just had a patient who was absolutely thrilled with their toric multifocal, and they never have to wear glasses again, and they're leaving for Hawaii in the morning, you know, getting a full makeover, I'm going to think, wow, that was the best thing I ever did. And now all of a sudden, everyone looks like a candidate. And even for someone like me, who has been doing multifocal IOLs for longer than I care to admit, you know, this can still pose a problem. That's just human nature.

And so what we're attempting to do with the oculotics program is to bring a little objectivity into the mix. Now, of course, we already do that when we talk about IOL power calculations: we leave that up to algorithms and let them do the work. One of the things that we've been able to do with oculotics is actually improve upon the way that power calculations are done. So rather than just looking at the dioptric power of a lens, for example, we're actually looking at the real optical properties of the lens, the modulation transfer function, in order to help correlate that with what a patient desires in terms of spectacle independence.

But the real brainchild here is the idea of incorporating patient feedback after surgery into the decision-making process. So part of this is actually to give our patients an app that they can use to provide feedback on their level of satisfaction, essentially by filling out the VFQ-25, which is simply a 25-item questionnaire that was developed in the 1990s by RAND Corporation to look at visual function and how satisfied people are with their vision, whether they have to worry about it, how they feel about their vision, whether they can drive at night comfortably, and all that.

So if we can incorporate that feedback into our decision making, now instead of my going into the next room with just what happened today fresh in my mind, I'll actually be incorporating the knowledge of every patient that I've operated on since I started using this system, and how they fared with these different IOLs.

So the machine learning algorithm can actually take this patient feedback and put that together with the preoperative characteristics such as, you know, personal items, such as hobbies, what they do for recreation, what their employment is, what kind of visual demands they have. And also anatomic factors, you know, the axial length, anterior chamber depth, corneal curvature, all of that. Put that all together, and then we can begin to match intraocular lens selection to patients based not only on their biometry, but also on their personal characteristics and how they actually felt about the results of their surgery.

So that's how I think machine learning can help us, and hopefully bring surgeons up to speed with premium IOLs more quickly because, you know, it's taken some of us years and years to gain the experience to really become confident in selecting which patients are right for premium lenses, particularly multifocal and extended depth of focus lenses and that sort of thing where, you know, there are visual side effects, and there are limitations, but there are also great advantages. And so hopefully using machine learning can bring young surgeons up to speed more quickly, increase their confidence, and allow them to increase the rate of adoption of these premium lenses among their patients.

The rest is here:
ASCRS 2023: Predicting vision outcomes in cataract surgery with ... - Optometry Times