Archive for the ‘Machine Learning’ Category

Using machine learning to tame plasma in fusion reactors – Advanced Science News

For fusion reactors to become practical, parameters such as plasma density and shape must be monitored in real time, and impending disruptions must be responded to instantly.

Nuclear fusion is widely regarded as one of the most promising sources of clean and sustainable energy for the future. In a fusion reaction, two light atomic nuclei combine to form another whose mass is less than the total mass of the original pair; according to Einstein's famous formula E = mc², this mass difference is transformed into energy that can be utilized.
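As a quick illustration of this mass-to-energy conversion (the reaction values below are standard textbook figures, not taken from the article), a few lines of Python recover the well-known ~17.6 MeV released by deuterium-tritium fusion:

```python
# E = mc^2 in practice: the mass lost in D + T -> He-4 + n becomes energy.
# Masses in atomic mass units (u); 1 u of mass is equivalent to 931.494 MeV.
U_TO_MEV = 931.494

m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

delta_m = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = delta_m * U_TO_MEV
print(f"mass deficit: {delta_m:.6f} u -> {energy_mev:.1f} MeV released")
```

Less than one percent of the reactants' mass disappears, yet the energy yield per reaction is millions of times that of a chemical bond.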

The problem with this source of energy is that for positively charged nuclei to fuse, they have to overcome the electrical repulsion between them. This requires the colliding nuclei to move at very high velocities, which is achieved by heating the substance in which the reaction takes place to an enormous temperature, at least tens of millions of kelvin.

Of course, no material can withstand contact with matter at such temperatures, so in all prototype fusion reactors a magnetic field is used to contain the hot plasma, limiting its movement and preventing it from touching the walls of the reactor. However, instabilities constantly arise in a hot plasma and can force it out of the magnetic container and into the reactor walls, damaging them. Such contact also cools the plasma and terminates the fusion reaction.

In order to prevent these violent plasma disruptions, it is necessary to monitor plasma parameters such as density and shape in real time and respond instantly to impending disruptions. To achieve this, a team of American and British scientists led by William Tang of Princeton University has developed machine learning-based software that can predict disruptions and analyze the physical conditions that lead to them.

In their work, the physicists used a large amount of data from the British JET facility and the American DIII-D machine, both tokamaks: fusion reactors in which the plasma is confined in the shape of a donut. To be more precise, the researchers used some of the data on the state of the plasma during reactor operation to train the program. This training allows the software to predict when a disruption will occur. The accuracy of these predictions could then be tested against real-world data not used in the training set.
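The train-then-validate workflow described here is the standard machine-learning pattern. The toy sketch below (synthetic data and a simple nearest-centroid classifier, not the team's actual model or features) shows the essence: fit on one portion of the data, then measure accuracy on held-out samples the model never saw during training:

```python
import random

random.seed(0)

# Hypothetical synthetic data: each sample has two invented plasma features
# plus a 0/1 label for "disruption ahead" -- purely illustrative.
def make_sample():
    disrupt = random.random() < 0.5
    base = 2.0 if disrupt else 1.0
    return ((random.gauss(base, 0.3), random.gauss(base, 0.3)), int(disrupt))

data = [make_sample() for _ in range(400)]
train, test = data[:300], data[300:]  # hold out data never seen in training

# "Training": compute the mean feature vector (centroid) for each class.
def centroid(samples):
    xs = [f for f, _ in samples]
    return tuple(sum(v[i] for v in xs) / len(xs) for i in range(2))

c0 = centroid([s for s in train if s[1] == 0])
c1 = centroid([s for s in train if s[1] == 1])

# Prediction: assign each new sample to the nearer centroid.
def predict(f):
    d0 = sum((a - b) ** 2 for a, b in zip(f, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(f, c1))
    return 0 if d0 < d1 else 1

accuracy = sum(predict(f) == y for f, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Evaluating only on held-out data is what makes the reported accuracy an honest estimate of how the predictor would behave on future, unseen plasma states.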

The team trained their software not only to correctly predict disruptions, but also to analyze the physical processes occurring in the plasma that lead to these events. This property of the algorithm is essential: in the operation of a real fusion reactor, it is important not only to recognize that a disruption is approaching, but also to be able to prevent it by changing the plasma parameters within milliseconds.

With a larger dataset and more powerful supercomputers, such as those currently being built at Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, and Argonne National Laboratory, the researchers hope they can make their algorithm even more sensitive to the processes occurring in the plasma, and hence more accurately predict and respond to impending disruptions.

They expect that the software they have developed will be implemented on the current prototype tokamaks whose data they used in their study, as well as on future, more powerful machines such as ITER, currently under construction in France. If so, it could bring stable energy production from fusion reactions closer.

References: William Tang et al., Implementation of AI/deep learning disruption predictor into a plasma control system, Contributions to Plasma Physics (2023), DOI: 10.1002/ctpp.202200095.

Julian Kates-Harbeck et al., Predicting disruptive instabilities in controlled fusion plasmas through deep learning, Nature (2019), DOI: 10.1038/s41586-019-1116-4.

Feature image credit: TheDigitalArtist on Pixabay


AI-Powered Government: The Role of Machine Learning in … – Fagen wasanni

Exploring the Future: AI-Powered Government and the Role of Machine Learning in Streamlining Public Services

As we stand on the cusp of a new era, the role of artificial intelligence (AI) in shaping our future cannot be overstated. One area where AI is poised to make a significant impact is public services, where machine learning technologies are being leveraged to streamline operations and enhance efficiency. This is the dawn of the AI-powered government, a concept that is rapidly gaining traction worldwide.

Machine learning, a subset of AI, involves the use of algorithms that improve automatically through experience. It is this ability to learn and adapt that makes machine learning a powerful tool for governments. By analyzing vast amounts of data, machine learning can identify patterns and trends that would be impossible for humans to discern. This can lead to more informed decision-making and more effective policies.

One of the key areas where machine learning can be applied is in predictive analytics. For instance, by analyzing historical data, machine learning algorithms can predict future trends in areas such as crime rates, disease outbreaks, or traffic congestion. This can enable governments to allocate resources more effectively and take proactive measures to address potential issues.
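A minimal, purely illustrative example of this kind of predictive analytics: fitting a linear trend to invented historical counts and extrapolating one step ahead (ordinary least squares, computed by hand; the figures are made up for the sketch):

```python
# Illustrative only: fit a straight-line trend to hypothetical yearly
# incident counts and forecast the next year.
years  = [2018, 2019, 2020, 2021, 2022]
counts = [120, 132, 141, 155, 163]  # invented historical data

n = len(years)
mean_x = sum(years) / n
mean_y = sum(counts) / n

# Least-squares slope and intercept for y = slope * x + intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

forecast_2023 = slope * 2023 + intercept
print(f"forecast for 2023: {forecast_2023:.0f}")
```

Real government deployments would of course use far richer models and data, but the principle is the same: learn a pattern from the past, then project it forward to guide resource allocation.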

Moreover, machine learning can also be used to automate routine tasks, freeing up government employees to focus on more complex issues. For example, machine learning algorithms can be used to sort through and categorize large volumes of data, such as applications for government services or public feedback. This can significantly reduce processing times and improve the efficiency of public services.

In addition, machine learning can also play a crucial role in enhancing transparency and accountability in government operations. By analyzing data on government spending and performance, machine learning algorithms can identify areas of inefficiency or potential corruption. This can help to ensure that public funds are being used effectively and that government officials are held accountable for their actions.

However, the adoption of machine learning in government also raises important questions about privacy and security. Governments must ensure that the use of AI technologies does not infringe upon citizens' rights to privacy and that adequate measures are in place to protect sensitive data from cyber threats.

Furthermore, there is also the issue of the digital divide. While AI technologies can greatly enhance the efficiency of public services, they also require a certain level of digital literacy to use effectively. Governments must therefore also invest in digital education and infrastructure to ensure that all citizens can benefit from these technologies.

In conclusion, the advent of the AI-powered government presents both opportunities and challenges. Machine learning technologies have the potential to revolutionize public services, making them more efficient, transparent, and responsive. However, governments must also navigate the complex issues of privacy, security, and digital inequality. As we move forward into this new era, it is clear that the role of machine learning in streamlining public services will be a key area of focus.


Navigating the New Frontier: Growth Opportunities in AI-Powered … – Fagen wasanni

Exploring the Uncharted Territory: Growth Prospects in AI-Driven IoT and Machine Learning Security Systems

The advent of artificial intelligence (AI), Internet of Things (IoT), and machine learning technologies has ushered in a new era of innovation, particularly in the realm of security systems. As we navigate this new frontier, it is becoming increasingly clear that these technologies present significant growth opportunities for businesses and industries worldwide.

AI-powered IoT and machine learning security systems are at the forefront of this technological revolution. These systems leverage the power of AI and machine learning to analyze vast amounts of data, identify patterns, and make predictions, thereby enhancing security and efficiency. The integration of AI and IoT in security systems is not just a trend; it's a paradigm shift that is reshaping the security landscape.

The growth prospects in this uncharted territory are immense. According to a report by MarketsandMarkets, the global AI in IoT market is expected to grow from USD 5.1 billion in 2019 to USD 16.2 billion by 2024, at a Compound Annual Growth Rate (CAGR) of 26.0% during the forecast period. This growth is driven by the increasing need for efficient and effective security solutions, the proliferation of IoT devices, and advancements in AI and machine learning technologies.
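The cited figures are internally consistent; compounding USD 5.1 billion at a 26.0% CAGR over the five-year forecast period (2019-2024) does land at roughly USD 16.2 billion:

```python
# Sanity-check the MarketsandMarkets projection: grow the 2019 base at the
# stated compound annual growth rate for five years.
start_billion = 5.1   # USD billion, 2019
cagr = 0.26           # 26.0% per year
years = 5             # 2019 -> 2024

end_billion = start_billion * (1 + cagr) ** years
print(f"projected 2024 market size: USD {end_billion:.1f} billion")
```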

AI-driven IoT security systems offer numerous benefits that contribute to their growing popularity. They provide real-time monitoring and detection of security threats, enabling swift response and mitigation. They also offer predictive analytics capabilities, allowing for proactive threat management. Furthermore, these systems can adapt and learn from new situations, enhancing their performance over time.

Machine learning, a subset of AI, plays a crucial role in these security systems. It enables the systems to learn from data, identify patterns, and make decisions with minimal human intervention. This not only improves the accuracy and efficiency of the systems but also frees up human resources for more strategic tasks.

However, as we explore this new frontier, it's important to acknowledge the challenges that come with it. The integration of AI and IoT in security systems raises concerns about data privacy and security. There's also the issue of the digital divide, as not all businesses and individuals have equal access to these advanced technologies. Moreover, there's a need for skilled professionals who can develop, implement, and manage these systems.

Despite these challenges, the potential of AI-powered IoT and machine learning security systems is undeniable. They offer a new approach to security that is proactive, intelligent, and adaptable. As these technologies continue to evolve, they are expected to drive significant growth and innovation in the security industry.

In conclusion, the integration of AI, IoT, and machine learning in security systems is a new frontier with vast growth opportunities. It's an exciting time for businesses and industries as they navigate this uncharted territory. While there are challenges to overcome, the potential benefits of these technologies far outweigh the risks. As we continue to explore this new frontier, it's clear that AI-powered IoT and machine learning security systems are not just the future of security; they are the present, reshaping the security landscape as we know it.


Research Rooted in Machine Learning Challenges Conventional … – National Institute of Justice

Researchers have developed a new analytical method to better understand how individuals move toward violent extremism.

Using machine learning, a form of artificial intelligence, the method reveals clusters of traits associated with possible pathways to terrorist acts. The resource may improve our understanding of how an individual becomes radicalized toward extremist violence.

The report on the study, which deploys those tools and blends elements of data science, sociology, and criminology, calls into question some common assumptions about violent extremism and about the homegrown individuals who are motivated to engage in behaviors supporting violent jihadist ideologies. See Table 1.

Table 1 shows select key insights from the project aimed at developing a new computational methodology that can mine multiple large databases to screen for behaviors associated with violent extremism.

The study departs from the research community's common use of demographic profiles of extremist individuals to predict violent intentions. Profiling runs the risk of relying on ethnic stereotypes in extremism studies and law enforcement practices, particularly with respect to American Muslims. According to the researchers, the method isolated the behaviors associated with potential terrorist trajectories after being trained on thousands of text data items coded by researchers.

Machine learning is an application of artificial intelligence that uses existing data to make predictions or classifications about individuals, actions, or events. The machine learns by observing many examples until it can statistically replicate them.

Researchers scanned large datasets to spot traits or experiences that are collectively associated with terrorist trajectories, employing a process that blends machine learning (see "What Is Machine Learning?") with an evidence-based behavioral model of radicalization associated with violence and other terrorism-related activities.

The machine-learning computational method analyzes and learns from copious data to isolate behaviors associated with potential terrorist trajectories.

The graph component depicts clusters of behavioral indicators that reveal those trajectories. The datasets generating those indicators include investigator notes, suspicious activity reports, and shared information. See "What Do We Mean by Graph? Defining It in Context."

This tool for understanding violent extremism is the work of Colorado State University and Brandeis University investigators, supported by the National Institute of Justice. The tool aims to isolate somewhat predictable radicalization trajectories of individuals or groups who may be moving toward violent extremism.

A key element of the work was the development of a Human-in-the-Loop system, which introduces a researcher into the data analysis. Because the data are so complex, the researcher mitigates difficulties by assisting the algorithm at key points during its training. As part of the process, the researcher writes and rewrites an algorithm to pick up key words, phrases, or sentences in texts. Then the researcher sorts those pieces of text with other text segments known to be associated with radicalization trajectories.
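A heavily simplified sketch of that keyword-spotting step follows. The indicator names and cue phrases below are invented for illustration; they are not the study's actual coding scheme:

```python
import re

# Hypothetical cue phrases tied to behavioral indicators (invented).
indicator_cues = {
    "travel_planning": ["bought a ticket", "applied for a passport"],
    "declaration_of_allegiance": ["pledged allegiance", "swore loyalty"],
}

def tag_sentences(text):
    """Return {indicator: [matching sentences]} for one document."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = {}
    for sentence in sentences:
        lowered = sentence.lower()
        for indicator, cues in indicator_cues.items():
            if any(cue in lowered for cue in cues):
                hits.setdefault(indicator, []).append(sentence)
    return hits

doc = "He bought a ticket to travel. Later he pledged allegiance to the group."
print(tag_sentences(doc))
```

In the Human-in-the-Loop workflow, a researcher would inspect the tagged segments, correct mislabeled ones, and refine the cue lists, so the rule set and the training data improve together.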

The Human-in-the-Loop factor is designed to help researchers code data faster, build toward a law enforcement intelligence capable of capturing key indicators, and enable researchers to transform textual data into a graph database. The system relies on a software-based framework designed to help overcome challenges posed by massive data volumes and complex extremist behaviors.

The research stems from the premise that radicalization is the product of deepening engagements that can be observed in changing behaviors. This concept is based on researchers observations that the radicalization process occurs incrementally.

The radicalization trajectory concept suggests that a linear pathway exists from an individual entertaining extremist ideas to ultimately taking extremist action marked by violence in the name of ideology.

The research findings validated that premise.

The researchers used 24 different behavioral indicators to search databases for evidence of growing extremism. Some examples of indicators are desire for action, issuance of threats, ideological rebellion, and steps toward violence. (See Figure 1 for an example of a set of cues, or behaviors, that the researchers associate with one behavioral indicator associated with planning a trip abroad.)

Source: Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen, Table 5, p. 23.

Because violent extremism remains a relatively rare phenomenon, data on known individuals who committed terrorist events was mined to identify cues representing behavioral extremist trajectories. To that end, researchers collected three types of data:

The sources of collected data were public documents ranging from news articles to court documents, including indictments and affidavits supporting complaints.

Of the 1,241 individuals studied, the researchers reported that 421 engaged in domestic terrorist violence, 390 became foreign fighters, and 268 became both foreign fighters and individuals engaged in domestic terrorism. A minority (162) were convicted of nonviolent terrorism-related offenses.

Researchers analyzed time-stamped behavioral data (such as travel abroad, a declaration of allegiance, information seeking, or seeking a new religious authority) using graph techniques to assess the order of subjects' behavioral changes and the most common pathways leading to terrorism-related action. See the sidebar "What Do We Mean by Graph? Defining It in Context."

The researchers made several notable findings beyond those presented in Table 1.

Although researchers found that terrorist crimes are often the work of older individuals (at least 25 years old, on average), the age-crime relationship varied across types of terrorist offenses. They found that, on average, people who committed nonviolent extremist acts were 10 years older than those who became foreign fighters. Although younger men (median age 23) are more likely to take part in insurgencies abroad, slightly older men (median ages 25-26) who have adopted jihadist ideologies are more likely to engage in violent domestic terrorist attacks. Individuals who committed violent acts at home were, on average, four years older than foreign fighters.

Researchers also found that men and a few women at any age may engage in nonviolent criminal support for terrorism. Also, men are six times more likely than women to commit violent offenses, both in the United States and abroad.

According to this study, individuals who have adopted jihadist ideologies and who are immigrants are more likely than those who are homegrown to engage in domestic extremist violence.

The dataset, comprising more than 1,200 individuals who had adopted jihadist ideologies, was used to track radicalization trajectories. It was limited by the availability of sufficiently detailed text sources, which introduced an element of bias. Much of the public data on terrorism come from prosecutions, but not all terrorism-related offenses are prosecuted in state or federal U.S. courts. Some of the subjects died while fighting for foreign terror organizations, which limited the available information on them.

Although data from public documents may be freely shared, the researchers noted that research based on public sources can be extremely time consuming.

Often public education efforts on anti-terrorism take place at schools where children learn about recruitment tactics by extremist groups and warning signs of growing extremism. However, the study found that more than half of those who commit extremist violent acts in the United States are older than 23 and typically not in school. This suggests that anti-terrorism education efforts need to expand beyond school settings.

By using machine learning to identify persons on a trajectory toward extremist violence, the research supports a further move away from reliance on demographic profiles of violent extremists and toward the use of behavioral indicators.

The research described in this article was funded by NIJ award 2017-ZA-CX-0002, awarded to Colorado State University. This article is based on the grantee report Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen.

A graph, in the context of this research, is a mathematical representation of a collection of connections (called edges) between things (called nodes). Examples include a social network, a crime network, or points on a map with paths connecting them; the concept is analogous to cities on a map and the roads or flight paths connecting them. The researchers in this violent extremism study isolated clusters of traits representing a more likely pathway to violent extremism. The concept is similar to a map app choosing the least congested roads (those allowing the most traffic) between two points. Graphs in this sense are quite visual and lend themselves well to conventional graphics.
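In code, such a graph is commonly stored as an adjacency list, and the map-app analogy corresponds to a shortest-path search. A minimal sketch with invented nodes (not data from the study):

```python
from collections import deque

# Hypothetical toy graph: nodes are cities, edges are undirected roads.
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")]

# Build an adjacency list: each node maps to its directly connected nodes.
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def shortest_path(start, goal):
    """Breadth-first search: fewest edges, like a map app picking a route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_path("A", "E"))  # ['A', 'B', 'C', 'E']
```

In the study's setting, the nodes would instead be behavioral indicators and the paths would trace common orderings of those behaviors over time.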



The Scamdemic: Can Machine Learning Turn the Tide? – CDOTrends

The worldwide digital space was gripped by an unprecedented surge in online scams and phishing attacks in 2022. Cybersecurity company Group-IB unveiled an alarming analysis detailing this rising threat.

Their recently launched study showed that the number of scam resources created per brand soared by 162% globally, and even more drastically in the Asia-Pacific region, with a whopping increase of 211% from 2021. The report also disclosed a more than three-fold increase in detected phishing websites over the last year.

These findings underscore the persistent cyber threat landscape, shedding light on a cyber menace that caused more than USD 55 billion in damages last year, according to the Global Anti Scam Alliance and ScamAdviser's 2022 Global State of Scams Report. Given these alarming trends, the scamdemic shows no signs of slowing down.

"Scam campaigns are not just affecting more brands each year; the impact that each individual brand faces is growing larger. Scammers are using a vast number of domains and social media accounts not only to reach a greater number of potential victims but also to evade counteraction," explained Afiq Sasman, head of the digital risk protection analytics team in the Asia Pacific at Group-IB.

The rise in scams was attributed to increased social media use and the growing automation of scam processes. Social media platforms often serve as the first point of contact between scammers and potential victims, with 58% of scam resources created on such platforms in the Asia-Pacific region last year. Group-IB's Digital Risk Protection analysts found that more than 80% of operations are now automated in scams like Classiscam.

Cybercriminals' use of automation and AI-driven text generators to craft convincing scam and phishing campaigns poses an escalating threat. Such advancements allow cybercriminals to scale their operations and provide increased security within their illicit ecosystems.

The study also highlighted the uptick in scam resources hosted on the .tk domain, accounting for 38.8% of all scam resources examined by Group-IB in the second half of 2022. This development reveals the increasing impact of automation in the scam industry, as affiliate programs automatically generate links on this domain zone.

The research underscores the urgent need for robust and innovative cybersecurity measures. By leveraging advanced technologies such as neural networks and machine learning, organizations can monitor millions of online resources to guard against external digital risks, protecting their intellectual property and brand identity. Only through such proactive measures can we hope to turn the tide of this digital 'scamdemic'.

Image credit: iStockphoto/Dragon Claws
