Media Search:



Machine learning calculates affinities of drug candidates and targets – Drug Target Review

A novel machine learning method called DeepBAR could accelerate drug discovery and protein engineering, researchers say.

A new technology combining chemistry and machine learning could aid researchers during the drug discovery and screening process, according to scientists at MIT, US.

The new technique, called DeepBAR, quickly calculates the binding affinities between drug candidates and their targets. The approach yields precise calculations in a fraction of the time compared to previous methods. The researchers say DeepBAR could one day quicken the pace of drug discovery and protein engineering.

"Our method is orders of magnitude faster than before, meaning we can have drug discovery that is both efficient and reliable," said Professor Bin Zhang, co-author of the study's paper. The affinity between a drug molecule and a target protein is measured by a quantity called the binding free energy: the smaller the number, the better the bind. A lower binding free energy means the drug can better compete against other molecules, meaning it can more effectively disrupt the protein's normal function.

Calculating the binding free energy of a drug candidate provides an indicator of a drug's potential effectiveness. However, it is a difficult quantity to compute. Methods for computing binding free energy fall into two broad categories: exact calculations, which are computationally intensive, and faster approximations, which sacrifice accuracy.

The researchers devised an approach to get the best of both worlds. DeepBAR computes binding free energy exactly, but requires just a fraction of the calculations demanded by previous methods.

The "BAR" in DeepBAR stands for Bennett acceptance ratio, a decades-old algorithm used in exact calculations of binding free energy. Using the Bennett acceptance ratio typically requires knowledge of two endpoint states (e.g. a drug molecule bound to a protein and a drug molecule completely dissociated from a protein), plus knowledge of many intermediate states (e.g. varying levels of partial binding), all of which slow down calculation speed.
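For intuition, the classical Bennett acceptance ratio can be sketched in a few lines of Python: given work samples from the forward and reverse transformations between the two endpoint states, the free-energy difference is the root of a self-consistency equation. This is a generic, illustrative estimator with synthetic Gaussian test data, not the DeepBAR code:

```python
import numpy as np

def bar_delta_f(w_forward, w_reverse, beta=1.0, lo=-50.0, hi=50.0, iters=200):
    """Bennett acceptance ratio estimate of the free-energy difference
    between two end states, from forward and reverse work samples
    (equal sample sizes assumed for simplicity)."""
    w_f, w_r = np.asarray(w_forward), np.asarray(w_reverse)

    def fermi(x):
        return 1.0 / (1.0 + np.exp(np.clip(x, -500, 500)))

    def imbalance(c):
        # The root of this monotone function is the BAR estimate of dF.
        return fermi(beta * (w_f - c)).sum() - fermi(beta * (w_r + c)).sum()

    for _ in range(iters):  # simple bisection on the self-consistency equation
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic check: Gaussian work distributions obeying Crooks' relation,
# so the true free-energy difference (3.0 here) is known exactly.
rng = np.random.default_rng(0)
df_true, sigma = 3.0, 2.0
w_fwd = rng.normal(df_true + sigma**2 / 2, sigma, 20000)
w_rev = rng.normal(-df_true + sigma**2 / 2, sigma, 20000)
print(bar_delta_f(w_fwd, w_rev))  # should recover a value close to df_true
```

DeepBAR's contribution, as described above, is replacing the many intermediate states such an estimator normally needs with reference states built by deep generative models.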

DeepBAR reduces in-between states by deploying the Bennett acceptance ratio in machine learning frameworks called deep generative models.

"These models create a reference state for each endpoint, the bound state and the unbound state," said Zhang. "These two reference states are similar enough that the Bennett acceptance ratio can be used directly, without all the costly intermediate steps."

"It is basically the same model that people use to do computer image synthesis," said Zhang. "We are sort of treating each molecular structure as an image, which the model can learn. So, this project is building on the effort of the machine learning community."

"These models were originally developed for two-dimensional (2D) images," said lead author of the study Xinqiang Ding. "But here we have proteins and molecules; it is really a three-dimensional (3D) structure. So, adapting those methods to our case was the biggest technical challenge we had to overcome."

In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. The researchers add that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method could be used to model interactions between multiple proteins.

In the future, the researchers plan to improve DeepBAR's ability to run calculations for large proteins, a task made feasible by recent advances in computer science.

"This research is an example of combining traditional computational chemistry methods, developed over decades, with the latest developments in machine learning," said Ding. "So, we achieved something that would have been impossible before now."

The research is published in the Journal of Physical Chemistry Letters.

The rest is here:
Machine learning calculates affinities of drug candidates and targets - Drug Target Review

Quantum computer | computer science | Britannica

Quantum computer, device that employs properties described by quantum mechanics to enhance computations.


As early as 1959 the American physicist and Nobel laureate Richard Feynman noted that, as electronic components begin to reach microscopic scales, effects predicted by quantum mechanics occur, which, he suggested, might be exploited in the design of more powerful computers. In particular, quantum researchers hope to harness a phenomenon known as superposition. In the quantum mechanical world, objects do not necessarily have clearly defined states, as demonstrated by the famous experiment in which a single photon of light passing through a screen with two small slits will produce a wavelike interference pattern, or superposition of all available paths. (See wave-particle duality.) However, when one slit is closed (or a detector is used to determine which slit the photon passed through), the interference pattern disappears. In consequence, a quantum system exists in all possible states before a measurement collapses the system into one state. Harnessing this phenomenon in a computer promises to expand computational power greatly. A traditional digital computer employs binary digits, or bits, that can be in one of two states, represented as 0 and 1; thus, for example, a 4-bit computer register can hold any one of 16 (2^4) possible numbers. In contrast, a quantum bit (qubit) exists in a wavelike superposition of values from 0 to 1; thus, for example, a 4-qubit computer register can hold 16 different numbers simultaneously. In theory, a quantum computer can therefore operate on a great many values in parallel, so that a 30-qubit quantum computer would be comparable to a digital computer capable of performing 10 trillion floating-point operations per second (TFLOPS), comparable to the speed of the fastest supercomputers.
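The register arithmetic above can be made concrete with a short simulation (an illustrative sketch, not part of the original article): an n-qubit state is a vector of 2^n complex amplitudes, and applying a Hadamard gate to each of 4 qubits puts the register into an equal superposition of all 16 basis states.

```python
import numpy as np

# A classical 4-bit register holds ONE of 16 values; a 4-qubit register's
# state is a vector of 2^4 = 16 amplitudes, all of which can be nonzero
# at once (a superposition).
n = 4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

state = np.zeros(2**n)
state[0] = 1.0                 # start in |0000>
U = H
for _ in range(n - 1):         # build H (x) H (x) H (x) H via Kronecker products
    U = np.kron(U, H)
state = U @ state

print(len(state))              # 16 amplitudes
print(np.allclose(state, 0.25))  # uniform superposition: each amplitude 1/sqrt(16)
```

This is exactly the sense in which a 4-qubit register "holds 16 numbers simultaneously": one measurement still yields a single 4-bit outcome, but the amplitudes evolve in parallel.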

During the 1980s and '90s the theory of quantum computers advanced considerably beyond Feynman's early speculations. In 1985 David Deutsch of the University of Oxford described the construction of quantum logic gates for a universal quantum computer, and in 1994 Peter Shor of AT&T devised an algorithm to factor numbers with a quantum computer that would require as few as six qubits (although many more qubits would be necessary for factoring large numbers in a reasonable time). When a practical quantum computer is built, it will break current encryption schemes based on multiplying two large primes; in compensation, quantum mechanical effects offer a new method of secure communication known as quantum encryption. However, actually building a useful quantum computer has proved difficult. Although the potential of quantum computers is enormous, the requirements are equally stringent. A quantum computer must maintain coherence between its qubits (known as quantum entanglement) long enough to perform an algorithm; because of nearly inevitable interactions with the environment (decoherence), practical methods of detecting and correcting errors need to be devised; and, finally, since measuring a quantum system disturbs its state, reliable methods of extracting information must be developed.

Plans for building quantum computers have been proposed; although several demonstrate the fundamental principles, none is beyond the experimental stage. Three of the most promising approaches are presented below: nuclear magnetic resonance (NMR), ion traps, and quantum dots.

In 1998 Isaac Chuang of the Los Alamos National Laboratory, Neil Gershenfeld of the Massachusetts Institute of Technology (MIT), and Mark Kubinec of the University of California at Berkeley created the first quantum computer (2-qubit) that could be loaded with data and output a solution. Although their system was coherent for only a few nanoseconds and trivial from the perspective of solving meaningful problems, it demonstrated the principles of quantum computation. Rather than trying to isolate a few subatomic particles, they dissolved a large number of chloroform molecules (CHCl3) in water at room temperature and applied a magnetic field to orient the spins of the carbon and hydrogen nuclei in the chloroform. (Because ordinary carbon has no magnetic spin, their solution used an isotope, carbon-13.) A spin parallel to the external magnetic field could then be interpreted as a 1 and an antiparallel spin as 0, and the hydrogen nuclei and carbon-13 nuclei could be treated collectively as a 2-qubit system. In addition to the external magnetic field, radio frequency pulses were applied to cause spin states to flip, thereby creating superimposed parallel and antiparallel states. Further pulses were applied to execute a simple algorithm and to examine the system's final state. This type of quantum computer can be extended by using molecules with more individually addressable nuclei. In fact, in March 2000 Emanuel Knill, Raymond Laflamme, and Rudy Martinez of Los Alamos and Ching-Hua Tseng of MIT announced that they had created a 7-qubit quantum computer using trans-crotonic acid. However, many researchers are skeptical about extending magnetic techniques much beyond 10 to 15 qubits because of diminishing coherence among the nuclei.

Just one week before the announcement of a 7-qubit quantum computer, physicist David Wineland and colleagues at the U.S. National Institute of Standards and Technology (NIST) announced that they had created a 4-qubit quantum computer by entangling four ionized beryllium atoms using an electromagnetic trap. After confining the ions in a linear arrangement, a laser cooled the particles almost to absolute zero and synchronized their spin states. Finally, a laser was used to entangle the particles, creating a superposition of both spin-up and spin-down states simultaneously for all four ions. Again, this approach demonstrated basic principles of quantum computing, but scaling up the technique to practical dimensions remains problematic.

Quantum computers based on semiconductor technology are yet another possibility. In a common approach a discrete number of free electrons (qubits) reside within extremely small regions, known as quantum dots, and in one of two spin states, interpreted as 0 and 1. Although prone to decoherence, such quantum computers build on well-established, solid-state techniques and offer the prospect of readily applying integrated circuit scaling technology. In addition, large ensembles of identical quantum dots could potentially be manufactured on a single silicon chip. The chip operates in an external magnetic field that controls electron spin states, while neighbouring electrons are weakly coupled (entangled) through quantum mechanical effects. An array of superimposed wire electrodes allows individual quantum dots to be addressed, algorithms executed, and results deduced. Such a system necessarily must be operated at temperatures near absolute zero to minimize environmental decoherence, but it has the potential to incorporate very large numbers of qubits.

Read the original:
Quantum computer | computer science | Britannica

How and when quantum computers will improve machine learning? – Medium

The different strategies toward quantum machine learning. They say you should start an article with a cool fancy image. (Image: Google's Sycamore quantum chip. Credit: Google)

There is a strong hope (and hype) that quantum computers will help machine learning in many ways. Research in Quantum Machine Learning (QML) is a very active domain, and many small and noisy quantum computers are now available. Different approaches exist, for both the long term and the short term, and we may wonder what their respective hopes and limitations are, both in theory and in practice.

It all started in 2009 with the publication of the HHL algorithm [1], proving an exponential speedup for matrix multiplication and inversion, which triggered exciting applications in all linear-algebra-based science, hence machine learning. Since then, many algorithms have been proposed to speed up tasks such as classification [2], dimensionality reduction [3], clustering [4], recommendation systems [5], neural networks [6], kernel methods [7], SVMs [8], reinforcement learning [9], and more generally optimization [10].

These algorithms are what I call Long Term or Algorithmic QML. They are usually carefully detailed, with guarantees proven as mathematical theorems. We can (theoretically) know the speedup over the classical algorithms they reproduce, which is often polynomial or even exponential with respect to the number of input data points in most cases. They come with precise bounds on the results' probability, randomness, and accuracy, as is usual in computer science research.

While they constitute theoretical proof that a universal and fault-tolerant quantum computer would provide impressive benefits in ML, early warnings [11] showed that some underlying assumptions were very constraining.

These algorithms often require loading the data with a Quantum Random Access Memory, or QRAM [12], a bottleneck part without which exponential speedups are much more complex to obtain. Besides, they sometimes need long quantum circuits and many logical qubits (which, due to error correction, are themselves composed of many more physical qubits), that might not be arriving soon enough.

When exactly? When we reach the universal fault-tolerant quantum computer, predicted by Google for 2029, or by IonQ in only five years. More conservative opinions claim this will not happen for 20+ years, and some even say we will never reach that point. The future will tell!

More recently, a mini earthquake amplified by scientific media has cast doubt on the efficiency of Algorithmic QML: the so-called dequantization papers [13], which introduced classical algorithms inspired by the quantum ones to obtain similar exponential speedups, in the field of QML at least. This impressive result was then tempered by the fact that the equivalent speedup only concerns the number of data points, and comes at the cost of a terrible polynomial slowdown with respect to other parameters, for now. This makes these quantum-inspired classical algorithms currently unusable in practice [14].

In the meantime, something very exciting happened: actual quantum computers were built and became accessible. You can play with noisy devices made of 5 to 20 qubits, and soon more. Quite recently, Google performed a quantum circuit with 53 qubits [15], the first that could not be efficiently simulated by a classical computer.

Researchers have since been looking at new models that these noisy intermediate-scale quantum (NISQ) computers could actually run [16]. They are all based on the same idea of variational quantum circuits (VQC), inspired by classical machine learning.

The main difference from algorithmic QML is that the circuit does not implement a known classical ML algorithm. One simply hopes that the chosen circuit will converge to successfully classify data or predict values. For now, there are several types of circuits in the literature [17], and we are starting to see interesting patterns in which ones succeed. The problem itself is often encoded in the loss function we try to decrease: we sum the error made compared to the true values or labels, or compared to the quantum states we aim for, or to the energy levels, and so on, depending on the task. Active research tries to understand why some circuits work better than others on certain tasks, and why quantumness would help.
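To illustrate the loss-function idea, here is a minimal, classically simulated variational circuit: a single qubit that encodes the input x with an RY(x) rotation, applies a trainable RY(theta), and predicts the expectation <Z> = cos(x + theta). The circuit, data, and hidden parameter are invented for this sketch; real VQCs have many qubits and parameters.

```python
import numpy as np

def predict(theta, x):
    """Toy 1-qubit VQC, simulated exactly: RY(x) encoding then RY(theta),
    measured in the Z basis. RY(a)|0> = [cos(a/2), sin(a/2)]."""
    a = x + theta
    amp0, amp1 = np.cos(a / 2), np.sin(a / 2)
    return amp0**2 - amp1**2          # <Z> = |amp0|^2 - |amp1|^2 = cos(x + theta)

def loss(theta, xs, ys):
    """Squared-error loss between circuit outputs and true labels."""
    preds = np.array([predict(theta, x) for x in xs])
    return np.mean((preds - ys) ** 2)

# Labels generated by a "hidden" circuit with theta* = 0.7; training means
# driving the loss toward zero. A grid search stands in for the optimizer.
xs = np.linspace(0, np.pi, 20)
ys = np.cos(xs + 0.7)
thetas = np.linspace(-np.pi, np.pi, 1001)
best = thetas[np.argmin([loss(t, xs, ys) for t in thetas])]
print(round(best, 2))  # recovers a value near 0.7
```

In practice the optimizer is gradient-based rather than a grid search, which is where the parameter-shift rule discussed below comes in.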

Another core difference is that many providers [18, 19, 20] allow you to program these VQC so you can play and test them on actual quantum computers!

In recent years, researchers have tried to find use cases where Variational QML would succeed at classical problems, or even outperform the classical solutions [21, 22]. Some hope that the variational nature of the training confers some resilience to hardware noise. If this happens to be the case, it would be beneficial not to wait for error-correction schemes that require many qubits; one would only need error-mitigation techniques to post-process the measurements.

On the theoretical side, researchers hope that quantum superposition and entangling quantum gates would project data into a much bigger space (the Hilbert space of n qubits has dimension 2^n), where some classically inaccessible correlations or separations can be found. Said differently, some believe that the quantum model will be more expressive.

It is important to notice that research on Variational QML is less focused on proving computational speedups. The main interest is to reach a more expressive or complex state of information processing. The two approaches are related but they represent two different strategies. Unfortunately, less is proven compared to Algorithmic QML, and we are far from understanding the theoretical reasons that would prove the advantage of these quantum computations.

Of course, due to the limitations of current quantum devices, experiments are often made on a small number of qubits (4 qubits in the above graph) or on simulators, which are either ideal or limited to around 30 qubits. It is hard to predict what will happen as the number of qubits grows.

Despite the excitement, VQCs also suffer from theoretical obstacles. It is proven that when the number of qubits or the number of gates becomes too big, the optimization landscape becomes flat, hindering the ability to optimize the circuit. Many efforts are being made to circumvent this issue, called barren plateaus [23], by using specific circuits [24] or smart initialization of the parameters [25].

But barren plateaus are not the only caveat. In many optimization methods, one must compute the gradient of the cost function with respect to each parameter; said differently, we want to know how much the model improves when each parameter is modified. In classical neural networks, computing the gradients is usually done using backpropagation, because we understand the operations analytically. With VQCs, the operations become too complex, and we cannot access intermediate quantum states (without measuring, and therefore destroying, them).

The current state-of-the-art solution is called the parameter-shift rule [27, 28] and requires applying the circuit and measuring its result two times for each parameter. By comparison, in classical deep learning the network is applied just once forward and once backward to obtain all of its thousands or millions of gradients. We could parallelize the parameter-shift rule across many simulators or quantum devices, but this could become limiting for a large number of parameters.
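The rule itself is short enough to sketch. For a gate generated by a Pauli rotation, the exact gradient of the measured expectation is the difference of two evaluations with the parameter shifted by plus and minus pi/2 (the single-qubit `expectation` function below is a stand-in for running a real circuit, assumed for illustration):

```python
import numpy as np

def expectation(theta):
    # Stand-in for executing the circuit and measuring: a single RY(theta)
    # on |0>, measured in Z, gives <Z> = cos(theta).
    return np.cos(theta)

def parameter_shift_grad(f, theta):
    """Parameter-shift rule: exact gradient from two circuit evaluations,
    valid for gates generated by Pauli rotations."""
    return (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

theta = 0.3
grad = parameter_shift_grad(expectation, theta)
print(np.isclose(grad, -np.sin(theta)))  # matches the analytic derivative of cos
```

Note that this is not finite differences: the pi/2 shifts give the exact analytic gradient, but a circuit with p parameters still costs 2p evaluations per optimization step, which is the overhead the text describes.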

Finally, researchers tend to focus more and more on the importance of loading data into a quantum state [29], also called a feature map [30]. Without the ideal amplitude encoding obtained with a QRAM, there are doubts that we will be able to load and process high-dimensional classical data with an exponential, or even large polynomial, advantage. Some hope remains for data-independent tasks such as generative models [21, 31] or solving partial differential equations.

Note that the expression "Quantum Neural Networks" has been used to highlight the similarities with classical neural network (NN) training. However, they are not equivalent, since VQCs don't have the same hidden-layer architecture, nor natural non-linearities unless a measurement is performed. And there's no simple rule to convert any NN to a VQC or vice versa. Some now prefer to compare VQCs to kernel methods [30].

We now have a better understanding of the advantages and weaknesses of the two main strategies towards quantum machine learning. Current research is now focused on two aspects:

Finally, and most importantly, improve the quantum devices! We all hope for constant incremental improvements, or a paradigm shift, in the quality of the qubits, their number, and the error-correction process, to reach powerful enough machines. Please, physicists, can you hurry?

PS: let's not forget to use all this amazing science to do good things that will benefit everyone.

Jonas Landman is a Ph.D. student at the University of Paris under the supervision of Prof. Iordanis Kerenidis. He is Technical Advisor at QC Ware and member of QuantX. He has previously studied at Ecole Polytechnique and UC Berkeley.

Read more:
How and when quantum computers will improve machine learning? - Medium

7 Posting Tips to Help Boost Your Personal Brand on LinkedIn – Social Media Today

Believe it or not, you already have a personal brand. The question is, are you leveraging your personal brand to monetize your expertise or accelerate your success?

Personal branding is the process of marketing yourself, and your career or business, in order to attract relevant opportunities. Marketing, in this context, means getting people to know, like and trust you, so that they'll eventually want to work with you or buy from you.

Content marketing, on the other hand, is the strategic process of creating and distributing content to attract a targeted audience. And on LinkedIn, your content strategy has a huge role to play in successfully building your personal brand.

So what does this mean for you?

With your expertise, and your drive to create relevant, useful and engaging content, you'll be on your way to building a powerful personal brand that'll give you "permission" to monetize your expertise using LinkedIn features.

In this post, we'll look at seven content marketing strategies and tips that'll help you boost your personal brand on LinkedIn.

Let's get started.

The thing that will differentiate you from everyone else on LinkedIn is providing super-valuable content that people simply cannot resist.

I'm talking about providing learning and networking opportunities so relevant and useful to your network that people will be willing to spend 10 or 30 minutes, or even an hour or more, consuming your content or joining your event.

Now, you can easily host LinkedIn Live if you have access to it, or maybe organize free webinars that offer value to your target audience. You can also leverage LinkedIn Event Pages to promote your events, and get people to register.

For example, in February, I hosted a LinkedIn Local Philippines - 2nd Virtual Panel Discussion through VB Consulting and invited speakers to share their insights with the audience.

And in January, I launched a free on-demand video series to help those who would like to use LinkedIn to land a job during the pandemic.

Here are some examples of highly valuable content that you could give away to or share with your LinkedIn network:

Instead of simply sharing any existing content that you've created for your general audience, try creating exclusive content for your LinkedIn network. After all, if you've been highly strategic in building your professional network on LinkedIn, they will mostly be your target market.

Industry influencers are influencers for a reason: People follow them.

Building relationships with influencers and mentioning them in your posts can help boost your visibility on LinkedIn - here are some examples:

Peter Brace mentions Amy Edmondson and Timothy Clark, among the pioneers in the field of psychological safety

Raymond Domingo mentions Robina Gokongwei-Pe, a highly reputable entrepreneur and President/CEO of one of the largest multi-format retailers in the Philippines

Anda Goseco mentions Marcia Reynolds, an Executive and Leadership Coach based in the US

Peter, Raymond and Anda didn't really talk about themselves in their posts; instead, they talked about the influencers they mentioned.

So what can we learn from these posts? If you're making this type of post, remember to make it about them, the influencers, not about you.

While expanding your reach on LinkedIn by mentioning influencers who engage with your posts is a good strategy, the opposite also works: if you already have a huge network, why not leverage it to help others build their LinkedIn presence?

This is a win-win strategy. You win because you expand your reach to other people's networks, and they win because they also become visible to your network. It also helps you build a strong community on the platform.

I've been using this strategy in one of the longest-running initiatives that I started in 2018: the Top 100 Filipinos to Follow on LinkedIn for Inspiration and Learning.

This is not just about recognizing people who are actively sharing content on LinkedIn, but also about encouraging more people who are new to the platform to become more active on LinkedIn.

When you're on LinkedIn, being aware of, and sensitive to, what is happening in your community is important.

This proved to be a super valuable tip when the pandemic began in 2020. At that time, people were losing their jobs, employees were forced to work from home unprepared, companies were turning to their business continuity plans, and the general public was forced to stay home.

Here in the Philippines, the first lockdown was declared in the second week of March 2020. With this context in mind, you can't keep posting content as if everything were business as usual; you need to revise your content plan to ensure you remain useful and relevant.

And during these challenging times, showing empathy in an authentic post can go a long way.

In this post below, Edward Musiak, an Australian who lives in the Philippines, posted about his experience during the early lockdown period in Manila.

Although Edward usually posts about sales and mental health, which are his expertise and advocacy, this post was unusual. But because he felt the need to share his insights about what he had been experiencing, as well as what he had seen others doing as a result of the lockdown in Manila, he posted about it.

As you can see, this more personal, insightful, empathetic update gained huge traction on LinkedIn.

Talking about our successes is easy, but bringing up failure is hard. And what I've learned on LinkedIn is that if you truly want to build a personal brand that resonates with people, that gets people to want to know more about you, to like you for who you are and to trust you for showing up, then you have to embrace vulnerability.

Being vulnerable means giving yourself permission to be yourself, and showing up when you have to. Being vulnerable also means showing up to your network as a relatable person who is not perfect, not all-knowing, and not worried about being judged by others.

Vulnerability builds connection, and connection builds trust. That trust is the one thing you need to create more opportunities for yourself and for others.

One of my most valuable posts of all time, in terms of the number of leads generated by a single post, remains this post, where I shared how I was rejected by LinkedIn in 2015 when I applied to LinkedIn Singapore:

When was the last time you shared your story on LinkedIn?

One of the things I've learned through the years is that people on LinkedIn either know what they want to achieve through the platform, or they don't know at all what they want to achieve.

Although it may seem like the ones who know what they want to achieve would be more successful on LinkedIn, I've learned that this is not always the case.

Many times, those whose top goal is to generate leads for their businesses are too focused on selling, so they end up operating with the wrong mindset, thinking about what they can get in terms of immediate leads or sales.

But LinkedIn is not a place where people want to hear sales pitches all the time. LinkedIn is a place where people engage with other people who provide valuable content, and whose stories resonate with them.

And guess what: the more you share who you are, the more people gravitate towards you. And that means more opportunities for you to start conversations and build meaningful business relationships.

In this post below, Peter openly shares a part of who he is that makes him different: a lifelong learner who entered university in his 50s and finished his Ph.D. in his 60s:

What most people don't realize is this: knowing and being yourself is a free, tried-and-tested way to increase your reach and attract like-minded people on LinkedIn.

Peter wasn't sure at first if this post was "appropriate" for LinkedIn, but posting it anyway led him to the answer:

Being who you are, and sharing what makes you different, indeed, can have a place on LinkedIn.

7. Reshare your top-performing posts

Your top-performing posts performed well for a reason. Maybe they resonated with your audience, or perhaps you posted them at the right time, when your network needed to read them most.

Reposting your top-performing posts will not only ensure you get a lot of views and reactions (again), but can also help you capture a whole new audience. Don't just post and forget; keep a record of your top-performing posts, and when the timing is right, go ahead and repost them.

In my case, I repost my top-performing content after at least 3-6 months. And it works like magic each time.

Below is an all-text post I shared in March 2020. This post reached over 97,500 people, and garnered almost 2,000 reactions and 81 comments.

The same post at the time was trending in #personalbranding:

I shared this post again this year. Here's the same post I just reshared three days ago (March 19, 2021):

And according to LinkedIn, this post garnered Top 1% engagement on the platform:

By creating content that resonates with and engages your target audience, you can attract the right people to do business with. But of course, it takes a lot of time and patience, as well as a willingness to strategically curate and create content that adds value to your network, while permitting yourself to be authentic.

Try using these seven tips to take your LinkedIn presence to the next level.

See the original post here:
7 Posting Tips to Help Boost Your Personal Brand on LinkedIn - Social Media Today

Trends in marketing that drive customer experience – The Financial Express

Social media remains a double-edged sword and needs skillful wielding.

By Piyali Chatterjee (Konar)

Traditionally, the role of marketing has always been about defining and introducing the brand to the customer. Today, customers are self-empowered, have unlimited access to information, and are more discerning. Customers no longer rely solely on the brand's advertising; in turn, marketers are required to expand their role from just defining and managing the brand to also strengthening the customer-brand relationship.

It has become vital for marketers to not just understand the business, but to also understand how brand value can be delivered via customer experience. This shift has led to customer experience (CX) becoming critical to the ultimate success of an organisation's marketing strategy. The pandemic has certainly changed the CX landscape. The change is reflected in what customers value, how they want to interact with the brand, and what they expect in return for their loyalty.

Marketers today are embracing technologies such as AI, ML, robotics, algorithms, etc. to ensure a superior customer experience. These tools allow brands to:

Gather and analyse social, historical, and behavioural data about customers to gain a better understanding of them
Offer better customer service
Launch digital customer self-service tools
Eliminate some of the pain points in the customer journey

Driving Personalisation with Intent:

Brands ahead of the CX curve are driving personalisation with the intent to provide greater value to customers in terms of time savings, effort, and a better product-service fit. When there is intent, personalisation drives customer loyalty and retention. The 2021 NPS DIGIPAY study conducted by Hansa Research puts Amazon Pay as the leader in Net Promoter Score (NPS), along with MobiKwik and ICICI Pockets; market leaders like Paytm, PhonePe and Google Pay score much lower. Net Promoter Score is a trusted customer loyalty metric used by brands to measure the health of customer relationships. Amazon Pay has the right components, such as easy-to-follow usage instructions and personalisation aimed at quick, seamless transactions. The fact that users have already established trust in the Amazon brand, having used it for years, helps Amazon Pay make inroads into creating the right CX for the customer, thus boosting its NPS.
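For reference, the NPS metric cited above is simple to compute: the percentage of promoters (ratings 9-10 on a 0-10 "would you recommend us?" scale) minus the percentage of detractors (ratings 0-6). A minimal sketch with hypothetical survey data:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 likelihood-to-recommend scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

survey = [10, 9, 9, 8, 7, 7, 6, 4, 10, 9]  # hypothetical responses
print(nps(survey))  # 5 promoters, 2 detractors out of 10 -> NPS of 30
```

Note that passives (7-8) count in the denominator but not the numerator, so the score ranges from -100 (all detractors) to +100 (all promoters).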

Measuring and Managing Social CX:

Social media is no longer just a place to send out information; it is emerging as a highly interactive channel between the brand and its customers. Consistency, responsiveness, and transparency are key to driving customer experience on a brand's social media platforms. Marketers need to recognise that social media is now a valuable customer support channel as well as a vital listening tool for insights. MobiKwik recently faced a backlash over a rumoured charge levied on dormant accounts on its platform. It was quick to take stock of the situation and put in corrective measures, reaching out to irate customers who wanted to disengage from the platform.

Social media remains a double-edged sword and needs skilful wielding. Used well, however, it can play a pivotal role in enhancing a brand's CX manifold.

Focus on Empathy and Personal Safety to remain relevant and connected to the customer:

The new normal warrants that brands focus on being more empathetic to customer needs. The pandemic has seen organisations demonstrate generosity through flexible processes and by listening to customers better to resonate with them at an emotional level. Personal safety is another critical element and is now a basic expectation in the customer's mind, e.g. mobile and contactless pickup or check-in options, and it is here to stay. The sudden explosion of digital payments across the country has been massively aided by the need to go contactless. QR codes at every vendor, from the bhel puri wala to the kirana store, have helped digital payments gain massive traction. This in turn has helped the local vendor show empathy towards his customers' need to stay safe during the pandemic, and has also helped the CX of the digital payment brand that facilitates a quick and seamless exchange of money.

Keeping Human Customer Support accessible, while digitising:

While self-service tools offer huge cost advantages to organisations, the human touch is still essential in building trust in the mind of the customer. The value of being able to connect with a person when there is a concern or a special situation should not be underestimated. Marketers need to create frictionless paths to their human support teams; failing to do so could lead to customers defecting in the long term. Tata Sky, Tata's popular DTH brand, is known for its great customer support: its customer service representatives make sure to address all queries and help the customer keep the service running without hindrance. As a result, Tata Sky scores high on CX and retains the customer's trust, helping the brand hold on to a customer base that is currently being disrupted by the rise of OTT platforms.

Surprise customers to delight:

Today, engaging customers is a challenge faced by all brands. One great way to engage customers is through the element of surprise, whether a single big surprise or a steady stream of small surprises that build over time, and it can successfully get customers generating positive word of mouth. After India's nail-biting win in the final test of the Australia vs India series at The Gabba, Zomato ran a spot promotion on its social media platforms to celebrate. Coupon codes like PANT and THE GABBA gave customers sizable discounts on their orders that evening. Zomato was thus topically relevant, showcased how cool it was, and scored big points on CX by getting a cricket-mad nation to flood its platform with celebratory orders. This was a great example of how customers can be pleasantly surprised into using a product, generating great CX.

The author is SVP of Hansa Research (CX Vertical). Views expressed are personal.

Read the original here:
Trends in marketing that drive customer experience - The Financial Express