Archive for the ‘Machine Learning’ Category

Making an Impact: IoT and Machine Learning in Business – Finextra

Two is better than one, isn't it? This is undoubtedly true in the case of IoT and machine learning. These two widely adopted technologies offer a solid growth platform for companies when implemented together correctly. Combined, they help you unlock the true power of data and boost business efficiency, sales, and customer relationships.

As a result, IoT and machine learning are being incorporated into business on a wide scale. We are going to discuss some of the popular areas where these technologies are used, but first, let's look at some statistics.

Statistics Showing the Trend of IoT and ML

According to IoT Analytics, the world will have 14.4 billion IoT-connected devices by the end of 2022, which is 10% more than the previous year.

By 2025, this number will reach approximately 27 billion, clearly indicating that businesses are adopting the technology quickly. The machine learning market, for its part, is expected to cross the $200 billion mark by 2025. These figures make it safe to say that the IoT and machine learning markets are not going to slow down anytime soon; rather, they will keep growing.

Now, a question pops up: what are the benefits of using IoT and machine learning in business? First things first, knowing how they work together will help you understand the true value they add to your business.

How Do IoT and Machine Learning Work Together?

As the name suggests, the Internet of Things is a network of sensor-equipped devices connected through the internet. This connection lets each device communicate with any other device on the network.

What happens after that? How do you put that data to use? Machine learning is the answer. A subset of AI, it is the process of using data to develop mathematical models, or algorithms, that train a computer without much human intervention.

With that learning, the system can anticipate the most likely outcome based on the data. The prediction can be right or wrong, and depending on the result, the algorithm updates itself to deliver a better prediction next time.
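As a rough sketch of this predict-then-update loop, here is a minimal example using scikit-learn's SGDClassifier on made-up data; it is an illustration of the general idea, not any specific business system:

```python
# Minimal sketch of the predict-then-update loop described above.
# Data is simulated; assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Pretend each batch is a day of labelled readings (2 features).
for day in range(5):
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # ground truth for the sketch

    if day > 0:
        preds = model.predict(X)               # anticipate outcomes first
        print(f"day {day}: accuracy {np.mean(preds == y):.2f}")

    model.partial_fit(X, y, classes=classes)   # update once the truth arrives
```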

Thus, the two complement each other, giving businesses a competitive advantage through data accumulation and analysis so that they can decide what is best for their growth. This is true for every type of sector, be it healthcare, finance, automotive, agriculture, or manufacturing.

But there is more than one reason to use IoT and machine learning in business processes. Let's take a closer look at their role in different businesses and the advantages they offer.

Benefits of IoT and Machine Learning for Businesses

It Automates Business Processes

For any organization, whether small or large, there is a certain set of business processes, and each one should be efficient in order to achieve the organization's goals. However, monotonous tasks like scheduling emails or record keeping can cause unnecessary delays and hamper overall productivity.

Machine learning and IoT can automate those boring, repetitive tasks to streamline business processes. Beyond that, they reduce the chances of human error and inefficiency, and improve lead follow-up and the scheduling of marketing campaigns, events, and more.

Adds an Extra Layer of Security

No workplace is fully protected from accidents, fraud, and cyberattacks. These are common across industries and, if not addressed immediately, can cause major losses to the business, its employees, and its customers.

But it is hard to keep an eye on every area and device. Using IoT and machine learning in business not only helps monitor each of them to identify loopholes and threats but also lets you take the necessary preventive measures beforehand.

Helps Identify the Most Productive Resources

Whether your business relies on financial, human, physical, or technological resources, it is essential to identify the most productive ones and eliminate those that are rarely used. IoT and machine learning can assist in analysing this and prevent unnecessary spending on unused, non-productive resources. They can also suggest where your company should deploy its resources.

Helps You Understand Your Customers

Customers are an important asset of any company, so keeping them satisfied is essential for success and revenue growth. Machine learning and IoT can help companies deliver what their customers want without guesswork: they can learn how customers interact with the brand and what they like and dislike the most.

With these insights in hand, you can create the products and services customers expect most, or analyse which ones are doing well in the market. Brands benefit in two ways: a better customer experience, and higher revenue from delivering the right products to the right audience. For e-commerce platforms, machine learning and IoT are the go-to technologies for achieving this.

Use Cases of IoT and Machine Learning in Various Businesses

Retail Industry: Supply Chain Management

The supply chain is data-reliant, which means wrong or incomplete data can cause a range of issues: cost inefficiency, technical downtime, difficulty determining prices and transportation costs, and inventory theft and loss, among others.

Fitting the devices involved with IoT sensors to extract vital data and feeding that data into machine-learning models can help in the following ways (a minimal sketch of such a pipeline follows the list):

- Improve the quality of products
- Reduce operational costs
- Check the status of delivery
- Prevent inventory theft and fraud
- Maintain the balance between demand and supply
- Improve supply chain visibility to boost customer satisfaction
- Boost transportation of goods across borders
- Increase operational efficiency and revenue opportunities
- Check for any defects in the product or industrial equipment
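One pattern behind several of these items, spotting theft, spoilage, or equipment defects in sensor streams, is unsupervised anomaly detection. Here is a minimal sketch under stated assumptions: simulated warehouse readings and scikit-learn's IsolationForest, not any particular vendor's pipeline.

```python
# Minimal sketch: flag unusual readings from a (hypothetical) warehouse.
# Assumes scikit-learn; the data is simulated, not real IoT telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: temperature (deg C), pallet weight (kg) reported by IoT sensors.
normal = rng.normal(loc=[4.0, 800.0], scale=[0.5, 20.0], size=(500, 2))
suspicious = np.array([[4.2, 610.0],   # sudden weight drop: possible theft
                       [9.5, 795.0]])  # temperature spike: possible spoilage
readings = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)     # -1 marks an anomaly

for row, flag in zip(readings[-2:], flags[-2:]):
    print(f"reading {row} -> {'ALERT' if flag == -1 else 'ok'}")
```

In practice, the same model would run continuously on streaming sensor data and route alerts to warehouse or logistics staff.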

Automotive Industry: Self-Driving Cars

IoT sensors are enhancing the capabilities of vehicles, making them smarter and more independent. We call them smart cars or self-driving cars, and in some cases a human driver is not even required. Together with artificial intelligence and machine learning, these vehicles can evaluate the situation on the road and make better decisions in real time.

They now have reliable cameras that give them a clear view of the road, while radar sensors let autonomous vehicles "see" even at night, improving their visibility.

Healthcare Industry: Smart Healthcare Solutions

Patient monitoring has become easier with machine learning and IoT. Doctors can now get real-time data on patients' health from connected devices and suggest tailored treatments.

Remote glucose monitoring is one such use case: doctors can track a patient's glucose level through continuous glucose monitoring (CGM) systems. If there is an anomaly in the glucose level, a warning notification is issued so that the patient can immediately contact the doctor and get the necessary treatment.
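As a simplified illustration of how such a warning might be triggered, here is a minimal threshold-based sketch; the limits and readings are made up for illustration and are not clinical guidance.

```python
# Minimal sketch of a glucose-alert rule on CGM readings (mg/dL).
# Threshold values are illustrative only, not medical advice.
from dataclasses import dataclass

@dataclass
class GlucoseAlert:
    low: float = 70.0
    high: float = 180.0

    def check(self, reading: float) -> str:
        if reading < self.low:
            return f"ALERT: low glucose ({reading} mg/dL) - contact doctor"
        if reading > self.high:
            return f"ALERT: high glucose ({reading} mg/dL) - contact doctor"
        return "ok"

monitor = GlucoseAlert()
for reading in [95, 110, 185, 62]:      # simulated CGM stream
    print(reading, "->", monitor.check(reading))
```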

The AI-equipped Apple Watch is another good example of machine learning and IoT. The smartwatch is useful for monitoring heart rate, and according to a study by Cardiogram, it is about 97 percent accurate at detecting paroxysmal atrial fibrillation, a condition characterized by an irregular heart rhythm.

Manufacturing Industry: Condition-Based Monitoring

Machines do not last forever; they continuously undergo wear and tear and eventually reach a point where they must be repaired or discarded. Because manufacturing depends heavily on machinery, manufacturers need to keep a close eye on machine health.

Condition-based monitoring (CBM) is one of the most important predictive maintenance strategies here. By applying machine learning techniques to the information gathered from IoT sensors, the status of the equipment can be monitored and conclusions drawn about its condition.

For example, mechanical misalignment, short circuits, and wear-out conditions can all be detected this way, helping to identify the root problem and how soon a machine will need maintenance.
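A minimal sketch of the underlying idea, fitting a model to "healthy" sensor readings and flagging anything that drifts away from them, is shown below; the features are simulated and the detector is scikit-learn's EllipticEnvelope, not a production CBM system.

```python
# Minimal condition-based-monitoring sketch: learn "healthy" vibration
# statistics, then flag readings that drift away from them.
# Data is simulated; a real system would use features from IoT sensors.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(1)

# Features per sample: RMS vibration (mm/s), bearing temperature (deg C).
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.2, 2.0], size=(300, 2))
worn    = np.array([[3.4, 71.0]])      # e.g. misalignment / wear signature

model = EllipticEnvelope(contamination=0.01).fit(healthy)
print("healthy sample:", model.predict(healthy[:1]))   # 1 = normal
print("worn sample:   ", model.predict(worn))          # -1 = needs maintenance
```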

Furthermore, this type of automated machine-learning assistance can cut human engineering effort by around 50%, reduce the maintenance budget, and boost machine availability. False alarms, one of the main issues in condition monitoring, can also be reduced by up to 90% with the help of machine learning models in CBM.

Conclusion

No single technology can bring massive success to a business on its own, so businesses should be flexible enough to combine several technologies. The Internet of Things (IoT) and machine learning are one such powerful combination that, when used correctly, can scale up a business's growth.

They are reshaping almost every industry, from agriculture to IT, making them more efficient, scalable, and productive.

Go here to read the rest:
Making an Impact: IoT and Machine Learning in Business - Finextra

We Need To Make Machine Learning Sustainable. Here’s How – Forbes


Irene Unceta is a professor and director of the Esade Double Degree in Business Administration & AI For Business

As machine learning progresses at breakneck speed, its intersection with sustainability is increasingly crucial. While it is clear that machine learning models will alter our lifestyles, work environments, and interactions with the world, the question of how they will impact sustainability cannot be ignored.

To understand how machine learning can contribute to creating a better, greener, more equitable world, it is crucial to assess its impact on the three pillars of sustainability: the social, the economic, and the environmental.

The social dimension

From a social standpoint, the sustainability of machine learning depends on its potential to have a positive impact on society.

Machine learning models have shown promise in this regard, for example, by helping healthcare organizations provide more accurate medical diagnoses, conduct high-precision surgeries, or design personalized treatment plans. Similarly, systems dedicated to analyzing and predicting patterns in data can potentially transform public policy, so long as they contribute to a fairer redistribution of wealth and increased social cohesion.

However, ensuring a sustainable deployment of this technology in the social dimension requires addressing challenges related to the emergence of bias and discrimination, as well as the effects of opacity.

Machine learning models trained on biased data can perpetuate and even amplify existing inequalities, leading to unfair and discriminatory outcomes. A controversial study conducted by researchers at MIT showed, for example, that commercial facial recognition software is less accurate for people with darker skin tones, especially darker-skinned women, reinforcing historical racial and gender biases.
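One practical way to surface this kind of disparity is to report accuracy per demographic group rather than a single aggregate figure. A minimal sketch with synthetic labels (the group names and data are made up for illustration):

```python
# Minimal fairness-audit sketch: compare accuracy per group instead of
# reporting one aggregate figure. Labels and groups here are synthetic.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

print(accuracy_by_group(y_true, y_pred, groups))  # e.g. {'A': 1.0, 'B': 0.5}
```

A large gap between groups is exactly the kind of signal the facial-recognition study exposed, and it only becomes visible once results are disaggregated.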

Moreover, large, intricate models based on complex architectures, such as those of deep learning, can be opaque and difficult to understand. This lack of transparency can have a two-fold effect. On the one hand, it can lead to mistrust and lack of adoption. On the other, it conflicts with the principle of autonomy, which refers to the basic human right to be well-informed in order to make free decisions.

To promote machine learning sustainability in the social dimension, it is essential to prioritize the development of models that can be understood and that provide insights into their decision-making process. Knowing what these systems learn, however, is only the first step. To ensure fair outcomes for all members of society, regardless of background or socioeconomic status, diverse groups must be involved in these systems' design and development, and their ethical principles must be made explicit. Machine learning models today might not be capable of moral thinking, as Noam Chomsky recently highlighted, but their programmers should not be exempt from this obligation.

The economic dimension

Nor should the focus be solely on the social dimension. Machine learning will only be sustainable for as long as its benefits outweigh its costs from an economic perspective, too.

Machine learning models can help reduce costs, improve efficiency, and create new business opportunities. Among other things, they can be used to optimize supply chains, automate repetitive tasks in manufacturing, and provide insights into customer behavior and market trends.

Even so, the design and deployment of machine learning can be very expensive, requiring significant investments in data, hardware, and personnel. Models require extensive resources, in terms of both hardware and manpower, to develop and maintain. This makes them less accessible to small businesses and developing economies, limiting their potential impact and perpetuating economic inequality.

Addressing these issues will require evaluating the costs and benefits carefully, considering both short- and long-term costs, and balancing the trade-offs between accuracy, scalability, and cost.

But not only that. The proliferation of this technology will also have a substantial impact on the workforce. Increasing reliance on machine learning will lead to job loss in many sectors in the coming years. Efforts must be made to create new job opportunities and to ensure that workers have the necessary skills and training to transition to these new roles.

To achieve economic sustainability in machine learning, systems should be designed to augment, rather than replace, human capabilities.

The environmental dimension

Finally, machine learning has the potential to play a significant role in mitigating the impact of human activities on the environment. Unless properly designed, however, it may turn out to be a double-edged sword.

Training and running industrial machine learning models requires significant computing resources: large data centers and powerful GPUs consume a great deal of energy, and the production and disposal of hardware and electronic components contribute further to greenhouse gas emissions.

In 2018, DeepMind released AlphaStar, a multi-agent reinforcement-learning-based system that produced unprecedented results playing StarCraft II. While the model itself can be run on an average desktop PC, its training required the use of 16 TPUs for each of its 600 agents, running in parallel for more than 2 weeks. This raises the question of whether and to what extent these costs are justified.

To ensure environmental sustainability we should question the pertinence of training and deploying industrial machine learning applications. Decreasing their carbon footprint will require promoting more energy-efficient hardware, such as specialized chips and low-power processors, as well as dedicating efforts to developing greener algorithms that optimize energy consumption by using less data, fewer parameters, and more efficient training methods.
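For a rough sense of scale, a training run's footprint can be estimated from hardware power draw, runtime, and grid carbon intensity. The sketch below uses illustrative placeholder figures, not measured values for AlphaStar or any other system:

```python
# Back-of-the-envelope training-footprint estimate.
# All constants are illustrative assumptions, not measured values.
def training_footprint(num_devices, avg_power_kw, hours, grid_kg_co2_per_kwh):
    energy_kwh = num_devices * avg_power_kw * hours
    co2_kg = energy_kwh * grid_kg_co2_per_kwh
    return energy_kwh, co2_kg

# Hypothetical example: 100 accelerators at 0.3 kW each, running for 14 days,
# on a grid emitting 0.4 kg CO2 per kWh.
energy, co2 = training_footprint(100, 0.3, 14 * 24, 0.4)
print(f"~{energy:,.0f} kWh, ~{co2 / 1000:.1f} t CO2")
```

Even this crude arithmetic makes the trade-off concrete: halving training time or power draw halves the estimated footprint, which is why greener algorithms and more efficient hardware matter.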

Machine learning may yet contribute to building a more sustainable world, but this will require a comprehensive approach that considers the complex trade-offs of developing inclusive, equitable, cost-effective, trustworthy models that have a low technical debt and do minimal environmental harm. Promoting social, economic, and environmental sustainability in machine learning models is essential to ensure that these systems support the needs of society, while minimizing any negative consequences in the long term.

Read more here:
We Need To Make Machine Learning Sustainable. Here's How - Forbes

Machine Learning Finds 140000 Future Star Forming Regions in the … – Universe Today

Our galaxy is still actively making stars. We've known that for a while, but sometimes it's hard to grasp the true scale in astronomical terms. A team from Japan is trying to help with that by using a novel machine-learning technique to identify soon-to-be star-forming regions spread throughout the Milky Way. They found 140,000 of them.

The regions, known in astronomy as molecular clouds, are typically invisible to humans. However, they do emit radio waves, which can be picked up by the massive radio telescopes dotted around our planet. Unfortunately, the Milky Way is the only galaxy close enough for us to pick up those signals, and even in our home galaxy the clouds are spread so far apart that it has been challenging to capture an overall picture of them.

So a team from Osaka Metropolitan University turned to machine learning. They took a data set from the Nobeyama radio telescope in Nagano Prefecture and looked for the signature of carbon monoxide molecules, which revealed an astonishing 140,000 molecular clouds in just one quadrant of the Milky Way.
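The team's published pipeline uses deep learning (see the paper linked below); purely to illustrate the general idea of grouping bright emission detections into candidate clouds, here is a minimal clustering sketch on synthetic positions. It is not their actual method.

```python
# Illustrative only: group (synthetic) CO-emission detections into "clouds"
# with DBSCAN. The Osaka team's published method uses deep learning; this
# simpler sketch is not their pipeline.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)

# Fake sky positions (longitude, latitude in degrees) of bright CO pixels,
# drawn around three made-up cloud centres plus background noise.
centres = np.array([[30.0, 0.1], [30.5, -0.2], [31.2, 0.0]])
points = np.vstack([c + rng.normal(scale=0.03, size=(80, 2)) for c in centres])
noise = rng.uniform([29.5, -0.5], [31.5, 0.5], size=(40, 2))
detections = np.vstack([points, noise])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(detections)
n_clouds = len(set(labels)) - (1 if -1 in labels else 0)
print(f"candidate clouds found: {n_clouds}")
```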

As a next step, the team looked deeper into the data and worked out how large the clouds were and where they sat in the galactic plane. Given that there are three more quadrants to explore, there's a good chance significantly more remain to be found.

But to access at least two of those quadrants, they need a different radio telescope. Nobeyama is located in Japan, in the northern hemisphere, and can't see the southern sky. Plenty of radio telescopes, such as ALMA, are already online in the southern hemisphere, and others are on the horizon, such as the Square Kilometre Array, which could provide an even deeper look at the southern galactic plane. The team just needs to pick which one they would like to use.

One of the great things about AI is that once you train it, which can take a significant amount of time, analyzing similar data sets is a breeze. Future work on more radio data should take advantage of that fact and allow Dr. Shinji Fujita and his team to quickly analyze even more star-forming regions. With some additional research, we'll be able to truly understand our galaxy's creation engine sometime in the not-too-distant future.

Learn More:
- Osaka Metropolitan University: AI draws most accurate map of star birthplaces in the Galaxy
- Fujita et al.: Distance determination of molecular clouds in the first quadrant of the Galactic plane using deep learning: I. Method and results
- UT: One of the Brightest Star-Forming Regions in the Milky Way, Seen in Infrared
- UT: Speedrunning Star Formation in the Cygnus X Region

Lead Image: Star-forming region Sharpless 2-106, about 2,000 light-years away from Earth. Credit: NASA, ESA, STScI/AURA


See the rest here:
Machine Learning Finds 140000 Future Star Forming Regions in the ... - Universe Today

7 free learning resources to land top data science jobs – Cointelegraph

Data science is an exciting and rapidly growing field that involves extracting insights and knowledge from data. To land a top data science job, it is important to have a solid foundation in key data science skills, including programming, statistics, data manipulation and machine learning.

Fortunately, there are many free online learning resources available that can help you develop these skills and prepare for a career in data science. These resources include online learning platforms such as Coursera, edX and DataCamp, which offer a wide range of courses in data science and related fields.

Data science and related subjects are covered in a variety of courses on the online learning platform Coursera. These courses frequently cover subjects such as machine learning, data analysis and statistics, and are taught by academics from prestigious universities.

Here are some examples of data science courses on Coursera:

One can apply for financial aid to earn these certifications for free. However, doing a course just for certification may not land a dream job in data science.

Kaggle is a platform for data science competitions that provides a wealth of resources for learning and practicing data science skills. One can refine their skills in data analysis, machine learning and other branches of data science by participating in the platform's challenges and working with its host of datasets.

Here are some examples of free courses available on Kaggle:

Related: 9 data science project ideas for beginners

EdX is another online learning platform that offers courses in data science and related fields. Many of the courses on edX are taught by professors from top universities, and the platform offers both free and paid options for learning.

Some of the free courses on data science available on edX include:

All of these courses are free to audit, meaning that you can access all the course materials and lectures without paying a fee. Nevertheless, there will be a cost if you wish to access further course features or receive a certificate of completion. A comprehensive selection of paid courses and programs in data science, machine learning and related topics are also available on edX in addition to these courses.

DataCamp is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers interactive coding challenges and projects that can help you build real-world skills in data science.

The following courses are available for free on DataCamp:

All of these courses are free and can be accessed through DataCamp's online learning platform. In addition to these courses, DataCamp also offers a wide range of paid courses and projects that cover topics such as data visualization, machine learning and data engineering.

Udacity is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers both free and paid courses, and many of the courses are taught by industry professionals.

Here are some examples of free courses on data science available on Udacity:

Related: 5 high-paying careers in data science

MIT OpenCourseWare is an online repository of course materials from courses taught at the Massachusetts Institute of Technology. The platform offers a variety of courses in data science and related fields, and all of the materials are available for free.

Here are some of the free courses on data science available on MIT OpenCourseWare:

GitHub is a platform for sharing and collaborating on code, and it can be a valuable resource for learning data science skills. However, GitHub itself does not offer free courses. Instead, one can explore the many open-source data science projects that are hosted on GitHub to find out more about how data science is used in practical situations.

Scikit-learn is a popular Python library for machine learning that provides a range of algorithms for tasks such as classification, regression and clustering, along with tools for data preprocessing, model selection and evaluation. The project is open-source and available on GitHub.
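A minimal example of the kind of workflow scikit-learn supports, using its bundled iris dataset:

```python
# Minimal scikit-learn workflow: split data, preprocess, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```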

Jupyter is an open-source web application for creating and sharing interactive notebooks. Jupyter notebooks provide a way to combine code, text and multimedia content in a single document, making it easy to explore and communicate data science results.

These are just a few examples of the many open-source data science projects available on GitHub. By exploring these projects and contributing to them, one can gain valuable experience with data science tools and techniques, while also building their portfolio and demonstrating their skills to potential employers.

Read this article:
7 free learning resources to land top data science jobs - Cointelegraph

Machine Intelligence and Humanity Benefit From "Spiral" of Mutual … – Neuroscience News

Summary: Humans and computers can interact via multiple modes and channels to respectively gain wisdom and deepen intelligence.

Source: Intelligent Computing

Deyi Li from the Chinese Association for Artificial Intelligence believes that humans and machines have a mutually beneficial relationship.

His paper on machine intelligence, published in Intelligent Computing, builds on five groundbreaking works by Schrödinger, the father of quantum mechanics, Turing, the father of artificial intelligence, and Wiener, the father of cybernetics.

Schrödinger and beyond: Machines can think and interact with the world as time goes by.

Inspired by Schrödinger's book What is Life? The Physical Aspect of the Living Cell, Li believes that machines can be considered living things. That is, like humans, they decrease the amount of entropy, or disorder, in their environment through their interactions with the world.

"The machines of the agricultural age and the industrial age existed only at the physical level, but now, in the age of intelligence, machines consist of four elements at two different levels: matter and energy at the physical level, and structure and time at the cognitive level. The machine can be the carrier of thought, and time is the foundation of machine cognition," Li explained.

Turing and beyond: Machines can think, but can they learn?

In 1936, Turing published what has been called the most influential mathematics paper, establishing the idea of a universal computing machine able to perform any conceivable computation. Such hypothetical computers are called Turing machines.

His 1950 paper Computing Machinery and Intelligence introduced what is now known as the Turing test for measuring machine intelligence, sparking a debate over whether machines can think. A proponent of thinking machines, Turing believed that a child machine could be educated and eventually achieve an adult level of intelligence.

However, given that cognition is only one part of the learning process, Li pointed out two limitations of Turing's model in achieving better machine intelligence. First, the machine's cognition is disconnected from its environment rather than connected to it.

This shortcoming has also been highlighted in a paper by Michael Wooldridge titled What Is Missing from Contemporary AI? The World. Second, the machine's cognition is disconnected from memory and thus cannot draw on memories of past experiences.

As a result, Li defines intelligence as the ability to engage in learning, the goal of which is to be able to explain and solve actual problems.

Wiener and beyond: Machines have behavioral intelligence.

In 1948, Wiener published a book that served as the foundation of the field of cybernetics, the study of control and communication within and between living organisms, machines and organizations.

In the wake of the success of the book, he published another, focusing on the problems of cybernetics from the perspective of sociology, suggesting ways for humans and machines to communicate and interact harmoniously.

According to Li, machines follow a control pattern similar to the human nervous system. Humans provide missions and behavioral features to machines, which must then run a complex behavior cycle regulated by a reward and punishment function to improve their abilities of perception, cognition, behavior, interaction, learning and growth.

Through iteration and interaction, the short-term memory, working memory and long-term memory of the machines change, embodying intelligence through automatic control.
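As a toy illustration of a behavior cycle regulated by reward and punishment, here is a standard tabular Q-learning loop; it is a generic textbook example, not Li's specific model.

```python
# Toy Q-learning loop: behavior is adjusted by reward (+) and punishment (-).
# Generic textbook example, not the specific model from the paper.
import random

n_states, n_actions = 5, 2          # tiny corridor world: move left/right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else -0.01   # goal vs. small penalty
    return nxt, reward

for episode in range(200):
    state = 0
    for _ in range(100):                             # cap episode length
        if random.random() < epsilon:                # explore
            action = random.randrange(n_actions)
        else:                                        # exploit learned values
            action = max(range(n_actions), key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
        if state == n_states - 1:
            break

best = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)]
print("learned preferred action per state:", best)
```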

"In essence, control is the use of negative feedback to reduce entropy and ensure the stability of the embodied behavioral intelligence of a machine," Li concluded.

The strength of contemporary machines is deep learning, which still requires human input, but leverages the ability of devices to use brute force methods of solving problems with insights gleaned directly from big data.

A joint future: from learning to creating

Machine intelligence cannot work in isolation; it requires human interaction. Furthermore, machine intelligence is inseparable from language, because humans use programming languages to control machine behavior.

The impressive performance of ChatGPT, a chatbot showcasing recent advances in natural language processing, proves that machines are now capable of internalizing human language patterns and producing appropriate example texts, given the appropriate context and goal.

Since AI-generated texts are increasingly indistinguishable from human-written texts, some are saying that AI writing tools have passed the Turing test. Such declarations provoke both admiration and alarm.

Li is among the optimists who envision artificial intelligence in a natural balance with human civilization. He believes, from a physics perspective, that cognition is based on a combination of matter, energy, structure and time, which he calls hard-structured ware, and expressed through information, which he calls soft-structured ware.

He concludes that humans and machines can interact through multiple channels and modes to gain wisdom and intelligence, respectively. Despite their different endowments in thinking and creativity, this interaction allows humans and machines to benefit from each other's strengths.

Author: Xuwen Liu
Source: Intelligent Computing
Contact: Xuwen Liu, Intelligent Computing
Image: The image is credited to Deyi Li

Original Research: Open access. "Cognitive Physics: The Enlightenment by Schrödinger, Turing, and Wiener and Beyond" by Deyi Li. Intelligent Computing

Abstract

Cognitive Physics: The Enlightenment by Schrödinger, Turing, and Wiener and Beyond

In the first half of the 20th century, five classic articles were written by three outstanding scholars, namely Wiener (1894 to 1964), the father of cybernetics, Schrödinger (1887 to 1961), the father of quantum mechanics, and Turing (1912 to 1954), the father of artificial intelligence.

The articles discuss concepts such as computability, life, machines, control, and artificial intelligence, establishing a solid foundation for machine intelligence (how can machines perceive and recognize as humans do?) and its future development.

Read more from the original source:
Machine Intelligence and Humanity Benefit From "Spiral" of Mutual ... - Neuroscience News