Archive for the ‘Quantum Computing’ Category

The Well-matched Combo of Quantum Computing and Machine Learning – Analytics Insight

The pace of progress in quantum computing mirrors the rapid advances made in AI and machine learning. It is natural to ask whether quantum technologies could boost learning algorithms: this field of inquiry is called quantum-enhanced machine learning.

Quantum computers are devices that operate according to the principles of quantum physics. The computers we currently use are built from transistors, and their information is stored as binary 0s and 1s. Quantum computers are built instead from quantum bits, qubits for short, which can be in multiple states simultaneously. The principal advantage of quantum computers is that they can perform extremely complex tasks at enormous speed, letting them tackle problems that are currently infeasible.
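The "multiple states simultaneously" idea can be made concrete with a few lines of arithmetic. The sketch below is a minimal illustration in plain Python (no quantum hardware or library involved; the function name is ours): a qubit is a pair of complex amplitudes, and the squared magnitudes of those amplitudes give the probabilities of reading out 0 or 1.

```python
import math

def measurement_probs(alpha: complex, beta: complex) -> tuple:
    """A qubit state is a pair of amplitudes (alpha, beta) with
    |alpha|^2 + |beta|^2 = 1; measuring it yields 0 with probability
    |alpha|^2 and 1 with probability |beta|^2."""
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalised"
    return p0, p1

# A classical bit corresponds to a definite state: always reads 0.
print(measurement_probs(1, 0))

# An equal superposition (what a Hadamard gate makes from |0>):
# both outcomes are equally likely, roughly (0.5, 0.5).
h = 1 / math.sqrt(2)
print(measurement_probs(h, h))
```

A real n-qubit register generalises this to 2**n amplitudes, which is where the many-states-at-once intuition, and the difficulty of simulating quantum machines classically, comes from.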

The most significant advantage of quantum computers is the speed at which they can solve certain complex problems. While they are lightning fast at what they do, they offer no shortcut for problems in the undecidable or NP-hard classes. There is a set of problems that quantum computing will be able to solve, but it is not applicable to all computing problems.

Typically, the problems that quantum computers are good at solving involve number or data crunching over a huge space of inputs, for example complex optimisation problems and communication-systems analysis: calculations that would normally take supercomputers days, years, even billions of years to brute-force.

The application routinely mentioned as one that quantum computers will be able to break quickly is strong RSA encryption. A recent report by the Microsoft Quantum Team suggests this could well be the case, calculating that it would be feasible with a quantum computer of around 2,330 qubits.
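What makes RSA vulnerable is Shor's algorithm, which reduces factoring to finding the period of modular exponentiation; only that period-finding step needs a quantum computer. The toy sketch below (plain Python, brute force, hopelessly slow at real key sizes; the tiny modulus is purely illustrative) shows the classical reduction wrapped around that step:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1, found by brute force.
    This is the step a quantum computer speeds up exponentially."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> tuple:
    """Shor's classical post-processing: from the period r of
    a mod n, recover non-trivial factors of n."""
    r = order(a, n)
    assert r % 2 == 0, "odd period; a different base a must be tried"
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

# 7 has period 4 modulo 15, which yields the factors 3 and 5.
print(shor_factor(15, 7))   # -> (3, 5)
```

For a 2,048-bit RSA modulus the period-finding loop above is astronomically slow, which is exactly the gap a large fault-tolerant quantum computer would close.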

It makes sense that optimisation applications are leading the pack, since today they are largely solved with brute force and raw computing power. If quantum computers can rapidly explore all the potential solutions, an optimal solution can emerge much more quickly. Optimisation also stands out because it is far more intuitive and easier to grasp.

The community of people who can apply optimisation and robust optimisation is a whole lot bigger. In the machine learning community, the connections between the technology and the requirements are technical; they are relevant mainly to researchers. What is more, there are far fewer statisticians in the world than there are developers.

In particular, the complexity of integrating quantum computing into the machine learning workflow presents an obstacle. For machine learning practitioners and researchers, it is fairly easy to work out how to program the system. Fitting that into a machine learning workflow is more challenging, because machine learning pipelines are becoming very complex. However, teams have already published a good deal of research on how to incorporate it sensibly into a training workflow.

For now, ML practitioners need someone else to handle the quantum computing part: machine learning experts are looking for someone else to do the legwork of building up the systems and the extensions and demonstrating that they fit.

In any case, the intersection of these two fields goes much further than that, and it is not only AI applications that can benefit. There is a meeting point where quantum computers run machine learning algorithms and conventional machine learning techniques are used to evaluate quantum computers. This area of research is developing at such blazing speed that it has spawned a whole new field called quantum machine learning.

This interdisciplinary field is still extremely new, however. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges remain significant and fully functional quantum computers are still far off.

The future of AI accelerated by quantum computing looks bright, with real-time, human-like behaviour almost an inevitable result. Quantum computing will be able to handle complex AI problems and obtain multiple solutions to complex problems simultaneously. This will allow artificial intelligence to perform complex tasks in more human-like ways. Likewise, robots that can make optimised decisions in real time in practical circumstances will become possible once we can harness quantum computers for artificial intelligence.

How far away is this future? Well, considering that only a handful of the world's top companies and universities are currently building (genuinely huge) quantum computers that still lack the required processing power, having a multitude of human-mimicking robots running about is probably a fair way off, which may comfort some people and disappoint others. Building just one, though? Perhaps not so far away.

Quantum computing and machine learning are remarkably well matched: the features the technology offers and the requirements of the field line up closely. For machine learning, those features matter for what you need to do; they are difficult to reproduce with a classical computer, yet you get them natively from a quantum computer. That fit can hardly be accidental. It will simply take time for people to find the right techniques for integrating the two, and then for the technology to embed itself productively in that space.

Original post:
The Well-matched Combo of Quantum Computing and Machine Learning - Analytics Insight

Picking up the quantum technology baton – The Hindu

In the Budget 2020 speech, Finance Minister Nirmala Sitharaman made a welcome announcement for Indian science: over the next five years she proposed spending ₹8,000 crore (about $1.2 billion) on a National Mission on Quantum Technologies and Applications. This promises to catapult India into the midst of the second quantum revolution, a major scientific effort that is being pursued by the United States, Europe, China and others. In this article we describe the scientific seeds of this mission, the promise of quantum technology and some critical constraints on its success, constraints that can be lifted with some imagination on the part of Indian scientific institutions and, crucially, some strategic support from Indian industry and philanthropy.

Quantum mechanics was developed in the early 20th century to describe nature in the small, at the scale of atoms and elementary particles. For over a century it has provided the foundations of our understanding of the physical world, including the interaction of light and matter, and has led to ubiquitous inventions such as lasers and semiconductor transistors. Despite a century of research, the quantum world remains mysterious and far removed from our everyday experience. A second revolution is now under way, with the goal of putting our growing understanding of these mysteries to use by actually controlling nature and harnessing the benefits of the weird and wondrous properties of quantum mechanics. One of the most striking of these is the tremendous computing power of quantum computers, whose experimental realisation is one of the great challenges of our times. Google's announcement, in October 2019, that it had demonstrated so-called quantum supremacy is one of the first steps towards this goal.

Besides computing, exploring the quantum world promises other dramatic applications, including the creation of novel materials, enhanced metrology and secure communication, to name just a few. Some of these are already around the corner. For example, China recently demonstrated secure quantum communication links between terrestrial stations and satellites. And computer scientists are working towards deploying schemes for post-quantum cryptography: clever schemes by which existing computers can keep communication secure even against the quantum computers of the future. Beyond these applications, some of the deepest foundational questions in physics and computer science are being driven by quantum information science. These include subjects such as quantum gravity and black holes.

Pursuing these challenges will require an unprecedented collaboration between physicists (both experimentalists and theorists), computer scientists, material scientists and engineers. On the experimental front, the challenge lies in harnessing the weird and wonderful properties of quantum superposition and entanglement in a highly controlled manner by building a system composed of carefully designed building blocks called quantum bits or qubits. These qubits tend to be very fragile and lose their quantumness if not controlled properly, and a careful choice of materials, design and engineering is required to get them to work. On the theoretical front lies the challenge of creating the algorithms and applications for quantum computers. These projects will also place new demands on classical control hardware as well as software platforms.

Globally, research in this area is about two decades old, but in India serious experimental work has been under way for only about five years, and in a handful of locations. What are the constraints on Indian progress in this field? So far we have been hampered by a lack of sufficient resources, high-quality manpower, timeliness and flexibility. The new announcement in the Budget would go a long way towards fixing the resource problem, but high-quality manpower is in global demand. In a fast-moving field like this, timeliness is everything: a funding delay of even one year is an enormous setback.

A previous programme, called Quantum Enabled Science and Technology, has only just been fully rolled out, more than two years after the call for proposals. Nevertheless, one has to laud the government's announcement of this new mission, on a massive scale and on a par with similar programmes announced recently by the United States and Europe. This is indeed unprecedented, and it is now largely up to the government, its partner institutions and the scientific community to work out the details of the mission and roll it out quickly.

But there are limits that come from how the government must do business with public funds. Here, private funding, via both industry and philanthropy, can play an outsized role even with much smaller amounts. For example, unrestricted funds that can be used to attract and retain high-quality manpower and to build international networks, all at short notice, can and will make an enormous difference to the success of this enterprise. This is the most effective way (as China and Singapore discovered) to catch up scientifically with the international community, while quickly creating a vibrant intellectual environment that helps attract top researchers.

Further, connections with Indian industry from the start would help quantum technologies become commercialised successfully, allowing Indian industry to benefit from the quantum revolution. We must encourage industrial houses and strategic philanthropists to take an interest and reach out to Indian institutions with an existing presence in this emerging field. As two of us can personally attest, the Tata Institute of Fundamental Research (TIFR), home to India's first superconducting quantum computing lab, would be delighted to engage.

R. Vijayaraghavan is Associate Professor of Physics at the Tata Institute of Fundamental Research and leads its experimental quantum computing effort; Shivaji Sondhi is Professor of Physics at Princeton University and has briefed the PM-STIAC on the challenges of quantum science and technology development; Sandip Trivedi, a Theoretical Physicist, is Distinguished Professor and Director of the Tata Institute of Fundamental Research; Umesh Vazirani is Professor of Computer Science and Director, Berkeley Quantum Information and Computation Center and has briefed the PM-STIAC on the challenges of quantum science and technology development


Read the original:
Picking up the quantum technology baton - The Hindu

What Is Moore's Law, and Did It Inspire the Computer Age? – zocalopublicsquare.org

by Rachel Jones | March 22, 2020

In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb into a visual change on your screen. This speed has benefits (in 2020, there's a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds).

What made the smartphone, and the rest of our unfolding digital transformation, possible? Many point to a prediction published in April 1965, in a then-little-read article toward the back of the trade paper Electronics. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It has been 55 years since the article's publication, and it is worth revisiting its original prediction, now known as Moore's Law.

If you ask people today what Moore's Law is, they'll often say it predicts that every 18 months, engineers will be able to find ways to double the number of transistors they can squeeze onto a tiny computer chip, thus doubling its processing power. It is a curious aspect of the law that this is not what Moore actually said, though he did predict consistent improvement in processing technology. Moreover, the world he anticipated did take shape, with his own work as a founder of the chipmaker Intel creating much of the momentum necessary to turn his law into a self-fulfilling prophecy.

Initially, Moore had few notions of changing the world. Early in life he discovered a love for chemistry, and though he was held back at school for his inarticulate style, he excelled at practical activities, making bombs and rockets in a home laboratory. He went on to study chemistry at UC Berkeley under two Nobel laureates, and earned a Ph.D. at the California Institute of Technology in 1954.

Moore's career trajectory coincided with the rise of the transistor, a device made of semiconductor material that can regulate electrical current flows and act as a switch or gate for electronic signals. As far back as the 1920s, physicists had proposed transistors as a way to improve on the unreliable, power-hungry vacuum tubes that amplified signals on telephone lines, and that would be used in the thousands in computers such as ENIAC and Colossus. In 1939, William Shockley, a young Bell Labs researcher, revived the idea of the transistor and tried to fabricate a device; despite several failures he persisted, and in 1947 he and two colleagues succeeded in making the world's first working transistor (for which they shared a Nobel Prize in Physics). In 1953, British scientists used transistors to build a computer, and Fortune declared it "The Year of the Transistor."

In 1955, Shockley moved to Mountain View, California, to be near his mother. He opened a semiconductor laboratory and picked a handful of young scientists to join him, including Moore and his Intel co-founder, Bob Noyce. The launch of the Sputnik satellite in 1957 and the escalation of the Cold War created a boom within a boom: Moore and seven colleagues, including Noyce, broke away from Shockley in a group quickly branded The Traitorous Eight, forming the seminal start-up Fairchild Semiconductor. They planned to make silicon transistors, which promised greater robustness, miniaturization and lower power usage, so essential for computers guiding missiles and satellites.

"Our curiosity was similar, but not our approach. Noyce liked things that flew. I liked things that blew up," said Gordon Moore, pictured (left) with Robert Noyce. Courtesy of Intel Free Press.

Developing the core manufacturing technology was a seat-of-the-pants adventure in which Moore played a central role. In March 1958, Fairchild received an order from IBM for 100 mesa transistors priced at $150 each. Mesas, made on 1-inch silicon wafers, were so named because their profiles resembled the flat-topped mesa formations of the American Southwest. Moore's responsibility was figuring out how to fabricate them reliably, which involved a complex chemical ballet and a considerable amount of thrift and improvisation. Unable to buy appropriate furnaces, Moore relied on glass-blowing skills to create gas-handling systems, assembled on cobbled-together aqua-blue kitchen cabinets and Formica countertops. ("Real lab furniture was as expensive as heck," he remarked.) Delivery solutions were similarly no-frills: Fairchild sent mesa transistors to IBM in a Brillo box from a local grocery store.

The mesa transistor was successful, but the company's new planar transistor (named for its flat topography) was a game-changer, bringing more stability and better performance. Another key development was the step of connecting transistors by making all the components of a complete circuit within a single piece of silicon, paving the way for the first commercial integrated circuits, or microchips. Everyone wanted miniaturized circuitry; the obstacle to greater computing power was its need for more components and interconnections, which multiplied the possibilities for failure. Noyce grasped a solution: why not leave transistors together in a wafer and interconnect them there, then detach the set as a single unit? Such microchips could be smaller, faster and cheaper than transistors manufactured individually and connected afterward. As early as 1959, Moore proposed that sets of these components "will be able to replace 90 percent of all circuitry in digital computers."


Six years later, in 1965, when he wrote his now-famous article in Electronics, "Cramming More Components onto Integrated Circuits," personal computers were still a decade away. Moore, who had seen the number of elements on a chip go from one, to eight, to 60, hinted at how integrated functions would broaden "[electronics'] scope beyond [his] imagination" and at the major impact the changes would bring, but saw his analysis as merely distilling a trend in technology that would make everything cheaper. Nevertheless, his analysis was rigorous. Doubling the number of components on an integrated circuit each year would steadily increase performance and decrease cost, which would, as Moore put it 10 years later, "extend the utility of digital electronics more broadly in society."

As chemical printing continued to evolve, the economics of microchips would continue to improve, and these more complex chips would provide the cheapest electronics. Thus, an electronics-based revolution could depend on existing silicon technology, rather than some new invention. By 1970, Moore asserted, the transistor that could be made most cheaply would be on a microchip 30 times more complex than one of 1965.

In 1968, Moore left Fairchild and joined Noyce to found Intel, with the aim of putting cleverness back into processing silicon. In 1975, he reviewed his original extrapolation. Chips introduced up to that point had followed the trend he predicted, but engineers were reaching the limits of circuit and device cleverness. Moore now proposed a doubling about every two years.

The analysis in Electronics was becoming known as Moore's Law. Having correctly observed the potential for exponential growth, Moore overcame his personal dislike of the spotlight by travelling widely to talk about his idea, taking every opportunity to persuade others. After all, the fulfilment of Moore's Law would be as much social as technical, relying on widespread acceptance: industry needed to invest to develop the technology, manufacturers needed to put microchips into their products, consumers needed to buy and use electronic devices and functions, and researchers and engineers needed to invent the advances that would extend Moore's Law.

In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute "a major revolution in the history of mankind, as important as the Industrial Revolution." He was so confident in his vision that he told a journalist that the students who'd made headlines getting kicked off campuses ("kids with the long hair and beards") were not the ones to watch: instead, he pronounced, "we are really the revolutionaries in the world today." In front of a crowd, he pointed out that if the auto industry had made progress at the same rate as silicon microelectronics, it would be more expensive to park your car downtown for the night than to buy a new Rolls-Royce. And, he recalled years later, one member of the audience pointed out, "yeah, but it'd only be 2 inches long and a half-inch high; it wouldn't be much good for your commute."

The rest is history. "For more than three decades," the New York Times pointed out in 2003, "Moore's Law has accurately predicted the accelerating power and plummeting cost of computing." Because of the exponential nature of Moore's prediction, each change has arrived faster and more furiously. Its curve, shallow at first (though spawning the microprocessor, the digital calculator, the personal computer and the internet along the way), has since 2005 gone almost straight up, hockey-stick style.

Despite the changes we've all witnessed, Moore's Law is still widely misunderstood, even in tech circles. "[It's] only 11 words long but most people manage to mangle it," said one report. Moore's 1965 article is a sophisticated piece of analysis, but many prefer to interpret it more vaguely: "The definition of Moore's Law has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line," noted its originator, dryly.

Up to April 2002, Intel's website stated that Moore predicted that the number of transistors per integrated circuit would double every 18 months, even though Moore had pointed out that he never said 18 months.

Why did 18 months stick? Perhaps because a projection by an Intel colleague in 1975 led to a conflation of transistor count and the doubling of performance; perhaps because this timescale appeared in an influential technology column in 1992, as the modern configuration of Silicon Valley was forming; perhaps because that speed simply felt more accurate to the semiconductor industry.
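Whatever its origin, the gap between the two readings compounds quickly. Under exponential growth, a quantity that doubles every p years grows by a factor of 2^(t/p) after t years, so a small sketch (plain Python, illustrative numbers only) shows how far the 18-month version overshoots Moore's own two-year cadence:

```python
def growth_factor(years: float, doubling_period: float) -> float:
    """How much a quantity multiplies over `years` years if it
    doubles once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over one decade, doubling every two years (Moore's 1975 revision)
# gives a 32x increase...
print(growth_factor(10, 2))        # -> 32.0

# ...while the popular 18-month reading gives roughly 100x,
# more than three times as much.
print(growth_factor(10, 1.5))      # about 101.6
```

The slipperiness Tuomi describes below is partly this: small changes in the assumed doubling period produce wildly different long-run forecasts.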

During the technology bust of the early 2000s, people began to speculate about the death of Moore's Law. Others suggested it would peter out because people would drop their computer fixations to spend less time at work and more with their families, or because Silicon Valley's obsession with it was unhealthy for business strategy. In 2007, the year the smartphone launched, Moore pointed out that "we make more transistors per year than the number of printed characters in all the newspapers, magazines, books, photocopies, and computer printouts." But he recognized that exponential growth could not continue forever; he knew the physical and financial constraints on shrinking the size of chip components.

When people in industry circles describe Moore's Law as a dictate, "the law by which the industry lives or dies," it is more evidence of the law's power within Silicon Valley culture than of its actual predictive accuracy. As the essayist Ilkka Tuomi observed in "The Lives and Death of Moore's Law," Moore's Law became an increasingly misleading predictor of future developments, something people understood to be more a rule of thumb than a deterministic natural law. In fact, Tuomi speculated, the very slipperiness of Moore's Law might account for its popularity: to an extent, tech people could pick and choose how they interpreted the dictum to suit their business needs.

Today, Moore's Law continues to thrive in the smartphone space, having put some 8.5 billion transistors into a single phone that fits in our pockets. The law may now be, in the words of one commentator, "more a challenge to the industry than an axiom for how chipmaking works," but for what began as a 10-year forecast, it has had an astonishing run. "Once you've made a successful prediction, avoid making another one," Moore quipped in 2015.

Even as technology continues to pervade our lives, with the advent of more specialized chips and materials, better software, cloud computing, and the promise of quantum computing, his law remains the benchmark and overarching narrative, both forecasting and describing our digital evolution.

Originally posted here:
What Is Moore's Law, and Did It Inspire the Computer Age? - zocalopublicsquare.org

Honeywell Achieves Breakthrough That Will Enable The World's Most Powerful Quantum Computer #47655 – New Kerala

The company also announced it has made strategic investments in two leading quantum computing software providers and will work together to develop quantum computing algorithms with JPMorgan Chase. Together, these announcements demonstrate significant technological and commercial progress for quantum computing and change the dynamics in the quantum computing industry.

Within the next three months, Honeywell will bring to market the world's most powerful quantum computer in terms of quantum volume, a measure of quantum capability that goes beyond the number of qubits. Quantum volume measures computational ability, indicating the relative complexity of a problem that can be solved by a quantum computer. When released, Honeywell's quantum computer will have a quantum volume of at least 64, twice that of the next alternative in the industry.
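Quantum volume is conventionally reported as a power of two: QV = 2^n, where n is the largest size for which the machine can reliably run a random "square" circuit, n qubits wide and n gate-layers deep. A minimal sketch of that bookkeeping in plain Python (assuming the standard IBM-style definition; the figures come from the claims in this article):

```python
import math

def quantum_volume(n: int) -> int:
    """QV = 2**n for the largest square random circuit
    (n qubits, depth n) a machine runs with acceptable fidelity."""
    return 2 ** n

def implied_circuit_size(qv: int) -> int:
    """Invert the relationship to see what a quoted QV implies."""
    return int(math.log2(qv))

# Honeywell's claimed QV of at least 64 implies reliable
# 6-qubit, depth-6 random circuits...
print(implied_circuit_size(64))                 # -> 6

# ...and is twice a QV-32 machine, which manages only 5-by-5.
print(quantum_volume(6) // quantum_volume(5))   # -> 2
```

Note that on this scale "twice the quantum volume" means one additional qubit-and-layer of reliable circuit, not twice the raw qubit count.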

In a scientific paper that will be posted to the online repository arXiv later today and is available now on Honeywell's website, Honeywell has demonstrated its quantum charge coupled device (QCCD) architecture, a major technical breakthrough in accelerating quantum capability. The company also announced it is on a trajectory to increase its computer's quantum volume by an order of magnitude each year for the next five years.

This breakthrough in quantum volume results from Honeywell's solution having the highest-quality, fully-connected qubits with the lowest error rates.

"Building quantum computers capable of solving deeper, more complex problems is not just a simple matter of increasing the number of qubits," said Paul Smith-Goodson, analyst-in-residence for quantum computing at Moor Insights & Strategy. "Quantum volume is a powerful tool that should be adopted as an interim benchmarking tool by other gate-based quantum computer companies."

Honeywell Chairman and Chief Executive Officer Darius Adamczyk said companies should start now to determine their strategy to leverage or mitigate the many business changes that are likely to result from new quantum computing technology.

"Quantum computing will enable us to tackle complex scientific and business challenges, driving step-change improvements in computational power, operating costs and speed," Adamczyk said. "Materials companies will explore new molecular structures. Transportation companies will optimize logistics. Financial institutions will need faster and more precise software applications. Pharmaceutical companies will accelerate the discovery of new drugs. Honeywell is striving to influence how quantum computing evolves and to create opportunities for our customers to benefit from this powerful new technology."

To accelerate the development of quantum computing and explore practical applications for its customers, Honeywell Ventures, the strategic venture capital arm of Honeywell, has made investments in two leading quantum software and algorithm providers: Cambridge Quantum Computing (CQC) and Zapata Computing. Both Zapata and CQC complement Honeywell's own quantum computing capabilities by bringing a wealth of cross-vertical-market algorithm and software expertise. CQC has strong expertise in quantum software, specifically a quantum development platform and enterprise applications in the areas of chemistry, machine learning and augmented cybersecurity. Zapata creates enterprise-grade, quantum-enabled software for a variety of industries and use cases, allowing users to build quantum workflows and execute them freely across a range of quantum and classical devices.

Honeywell also announced that it will collaborate with JPMorgan Chase, a global financial services firm, to develop quantum algorithms using Honeywell's computer.

"Honeywell's unique quantum computer, along with the ecosystem Honeywell has developed around it, will enable us to get closer to tackling major and growing business challenges in the financial services industry," said Dr. Marco Pistoia, managing director and research lead for the Future Lab for Applied Research & Engineering (FLARE) at JPMorgan Chase.

Honeywell first announced its quantum computing capabilities in late 2018, although the company had been working on the technical foundations for its quantum computer for a decade prior to that. In late 2019, Honeywell announced a partnership with Microsoft to provide cloud access to Honeywell's quantum computer through Microsoft Azure Quantum services.

Honeywell's quantum computer uses trapped-ion technology, which leverages numerous, individual, charged atoms (ions) to hold quantum information. Honeywell's system applies electromagnetic fields to hold (trap) each ion so it can be manipulated and encoded using laser pulses.

Honeywell's trapped-ion qubits can be generated uniformly, with errors that are better understood than in alternative qubit technologies that do not directly use atoms. These high-performance operations require deep experience across multiple disciplines, including atomic physics, optics, cryogenics, lasers, magnetics, ultra-high vacuum, and precision control systems. Honeywell has a decades-long legacy of expertise in these technologies.

Today, Honeywell has a cross-disciplinary team of more than 100 scientists, engineers, and software developers dedicated to advancing quantum volume and addressing real enterprise problems across industries.

Honeywell (www.honeywell.com) is a Fortune 100 technology company that delivers industry-specific solutions that include aerospace products and services; control technologies for buildings and industry; and performance materials globally. Our technologies help aircraft, buildings, manufacturing plants, supply chains, and workers become more connected to make our world smarter, safer, and more sustainable. For more news and information on Honeywell, please visit http://www.honeywell.com/newsroom.

Originally posted here:
Honeywell Achieves Breakthrough That Will Enable The World's Most Powerful Quantum Computer #47655 - New Kerala

Why resilience is the key to future security – Raconteur

Resilience is at the heart of information security. As threats adapt and evolve and we accept that systems will be compromised, it is no longer enough just to have strong defences in place. The sophisticated tools and techniques of threat actors will find a way around them. Organisations, their security architecture, systems, policies and strategies need to be resilient, able to cope, recover and, most of all, to learn from incidents.

Our sector as a whole needs to be resilient; human skills and expertise are at the heart of this. We must attract, recruit and retain the talent and skills to tackle new and emerging risks and challenges. We must also embrace diversity in all its forms to find, nurture and train professionals.

It is the responsibility of every organisation to drive inclusivity and diversity in the industry. We should look beyond the traditional routes into information security and consider the other transferable skills and attitudes that candidates can offer. These include broader business skills, such as the ability to negotiate, financial acumen and leadership skills, which are increasingly needed as part of a modern-day security team.

It also includes skills from outside the industry, so it is encouraging to see organisations starting to recruit more people from sectors like healthcare, the emergency services, design and gaming.

But resilience goes much further than this. We, as infosecurity professionals, need to be resilient ourselves, developing new skills and, on a personal level, being resilient to the pressures and stress currently facing our industry.

Employee mental health and wellbeing should be an essential consideration for all employers and be part of company culture and organisational values. But perhaps we could do more in an industry that is faced with growing cyberthreats, longer working hours and individuals often having to cover gaps left by under-resourced teams. It's clear from what we are hearing from our community of chief information security officers that infosecurity professionals are under more pressure than ever before.

But with challenges come opportunities. The industry is undergoing a huge transformation as it embraces new and emerging technologies, such as quantum computing, data analytics and artificial intelligence tools, which can play a key role in enhancing the capabilities of security systems to identify and mitigate risks, and ease the pressure on security teams.

As an information and cybersecurity community, we can help to keep our world safe and unlock more of the good things that technology promises and delivers. There is no time like the future and, ultimately, it is in our hands. But this goes beyond just the information security industry and out to a wider group of individuals and organisations.

By working together, companies, governments, industry bodies, academia, suppliers and other stakeholders can share their knowledge and intelligence, learn from each other and get ahead of cybercriminals. This need to collaborate and share knowledge has never been more important as new kinds of threats emerge from new breeds of attackers, and we need to stay one step ahead.

Resilience is our conference theme this year, addressing the most relevant and decisive factors in information and cybersecurity in the next five years.

By building resilience across the industry, we can move towards a more secure world and a more secure future.

Here is the original post:
Why resilience is the key to future security - Raconteur