Media Search:



2nd Amendment now part of Constitution – The Herald

The Herald

Herald Reporter

The Second Amendment to the Constitution of Zimbabwe came into force yesterday following the required assent by more than two thirds of the National Assembly, two thirds of the Senate and the consent of the President.

The Amendment Act, gazetted yesterday, deals with a number of issues: The selection of vice-presidents and who fills a presidential vacancy, the number of non-parliamentary technocrats in Cabinet, the retention of the specially elected extra women members of the National Assembly for another 10 years with more detail given of who must be in that group, the addition of 10 specially elected extra youth members, the terms of office of judges and other matters concerning their conditions, the appointment of the Chief Secretary to the Office of the President and Cabinet, matters relating to how the independent Prosecutor General can be removed, the make-up of the provincial and metropolitan councils, and the addition of extra women councillors in local authorities.

Under the Constitution there was a temporary arrangement for the first two Presidential terms, whereby the President would appoint up to two vice-presidents, who served essentially on the same terms as other Cabinet members.

When there was a Presidential vacancy, through death, resignation or impeachment, the political party that had nominated the winning candidate who had just died or left office would name the successor to serve out the rest of the term.

From 2023 the system was supposed to change to an arrangement where every Presidential candidate would select in advance the two candidates for First Vice-President and Second Vice President and the three would stand as a single ticket. There was to be automatic promotion in the event of a Presidential or First Vice-Presidential vacancy, with arrangements set out for a new Second Vice-President.

Under the Second Amendment Act the temporary arrangement has been made permanent.

There were arguments that this system, now made permanent, allowed a President to name their successor. In fact the opposite is true. Under the system that was due to take effect in 2023, the President would, in effect, have been naming their successor in advance. Under the temporary system now made permanent, the party that won the Presidency makes that choice, with the only condition being that the successor must meet the constitutional qualifications for President.

The only time in independent Zimbabwe when there was a vacancy in the Presidency, in 2017 after the resignation of Robert Mugabe, the winning party, Zanu PF through a central committee vote, chose Emmerson Mnangagwa to serve out the term to mid 2018.

In the 2013 Constitution there was a provision for the life of two Parliaments, 10 years, for an additional 60 women members of the National Assembly to join the 210 members elected in constituencies. These 60, six from each province, would be elected by proportional representation based on the votes cast for each party during the constituency elections in that province.

The time limit was set on the assumption that over a decade parties would be nominating more women for constituency seats and that something close to gender parity would be obtained without the extra seats.

This is now considered to be work in progress rather than attainable soon, so the arrangement has been extended for another two Parliaments, that is another 10 years.

However, the amendment does now require that the party lists used for elections ensure that at least 10 of the extra women are under 35, that women with disabilities are represented on the lists, and that an Act of Parliament is passed that gives the terms of how young women with disabilities are represented on the lists.

This is likely to be an amendment to the Electoral Act.

The constitutional amendment, besides ensuring that there must be at least 10 young women, also adds an extra 10 seats to the National Assembly for people aged 21 to 35, one from each province but elected by proportional representation using the national constituency vote for each party.

Each party's list must alternate men and women. The result of the two clauses will be a minimum of 20 MPs under 35, five men and 15 women, although younger people can still win constituency seats or be nominated for more of the special women's seats.

A batch of clauses deal with the judiciary. For initial appointment to the bench, the amendment retains the present system of nomination, interviews, and a short list of three names submitted to the President. But this system no longer applies for promotions on the bench.

The President, acting on the recommendation of the Judicial Service Commission, can promote a judge to a higher court.

The retirement age of judges is now set at 70, but they can elect to serve until they are 75, so long as they decide before their 70th birthday and submit a medical report confirming they are mentally and physically fit to remain in office.

Constitutional Court judges now serve a single 15-year term and cannot be reappointed. But if they are still under 75 at the completion of that term they have the option of returning to the Supreme or High Court.

In a brief clause the Civil Service is now called the Public Service, but more importantly 10 percent of new appointments to the Public Service must now be people with disabilities.

The post of Chief Secretary to the Office of the President and Cabinet is now a constitutional post, with the holder and their deputies appointed by the President after consulting the Public Service Commission.

But the Constitution now formally names the Chief Secretary as the most senior member of the Public Service, which has been the case but not formally, and makes it clear that Permanent Secretaries shall report to the Chief Secretary on any matter affecting them as a class.

That basically also makes the group of top public servants a constitutional class that can act together.

The Prosecutor General in the 2013 Constitution basically served on the same conditions as a judge, with the same dismissal procedure to ensure independence.

The amended section retains the independence and the need for a tribunal, consisting of two present or past Supreme Court judges and a High Court judge or a person qualified to be a judge. The President appoints the tribunal if he considers the question of removal needs to be investigated.

The slight difference from the procedure for a tribunal investigating a judge, where the Judicial Service Commission needs to make a recommendation before the President appoints a tribunal, was to make it clear that while the Prosecutor General is the leading lawyer who appears in court, they are still below a judge in status.

With the Second Republic taking devolution seriously, such as granting budgets to local authorities, and wanting effective provincial councils as the top tier of the devolved structures, the amendment Act goes into the membership of these councils to ensure that the members come from the bottom up with no one moving in from the top down.

So MPs and senators are barred from sitting on councils, largely because Parliament is expected to oversee the councils and no one can be a judge of themselves.

There is a small naming change; the provincial councils for metropolitan provinces are now called metropolitan councils.

The members of each council are the chairperson, who is chosen by the rest of the council from two candidates submitted by the political party that won the most National Assembly seats in the last general election, or if there is no single party in that position, then the party that won the most National Assembly votes. This was already in the Constitution.

This means that while the winning party in a province names the chairperson, the rest of the council members can have an input into which person from that party they would prefer in the centre chair.

Every mayor and chairperson of every urban and rural local authority in the province, regardless of what they are called, sits by right as a member of the provincial or metropolitan council.

Then in addition, there must be 10 women members elected from party lists on the basis of how their parties performed in the last Parliamentary election. But women with disabilities must now be included on those lists. The details will be set in amendments to the Electoral Law.

Local authorities now also get specially elected extra women councillors. An Act of Parliament is now needed to allow at least 30 percent extra seats reserved for women using proportional representation on party lists based on how the party candidates performed in the last general election for the council.

To clear one slightly ambiguous area over international agreements with financial commitments, the amendment Act now states that except for loan agreements and guarantees, already dealt with in the Constitution, any agreement which is not an international treaty but which is concluded or executed by the President with foreign organisations or entities, and which imposes fiscal obligations on Zimbabwe, does not bind Zimbabwe until approved by Parliament.


AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications – MedTech Intelligence

An increasing number of medical devices incorporate artificial intelligence (AI) capabilities to support therapeutic and diagnostic applications. In spite of the risks connected with this innovative technology, the applicable regulatory framework does not specify any requirements for this class of medical devices. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications for medical devices on how to demonstrate conformity with the essential requirements.

The term artificial intelligence (AI) describes the capability of algorithms to take over tasks and decisions by mimicking human intelligence.1 Many experts believe that machine learning, a subset of artificial intelligence, will play a significant role in the medtech sector.2,3 Machine learning is the term used to describe algorithms capable of learning directly from a large volume of training data. The algorithm builds a model based on training data and applies the experience it has gained from training to make predictions and decisions on new, unknown data.

Artificial neural networks are a subset of machine learning methods which have evolved from the idea of simulating the human brain.22 Neural networks are information-processing systems used for machine learning and comprise multiple layers of neurons. Between the input layer, which receives information, and the output layer, there are numerous hidden layers of neurons. In simple terms, neural networks comprise neurons, also known as nodes, which receive external information or information from other connected nodes, modify this information, and pass it on, either to the next neuron layer or to the output layer as the final result.5 Deep learning is a variation of artificial neural networks that consists of multiple hidden neural network layers between the input and output layers. The inner layers are designed to extract higher-level features from the raw external data.
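
To make the layered structure described above concrete, here is a minimal, illustrative sketch of a small feed-forward neural network. The framework (PyTorch), layer sizes and activation functions are assumptions chosen for illustration, not something specified in the article.

```python
# Minimal sketch of a feed-forward neural network with hidden layers.
# Framework, layer sizes and activations are illustrative assumptions only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64),   # input layer: 32 features in, 64 neurons out
    nn.ReLU(),
    nn.Linear(64, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 1),    # output layer: a single prediction
    nn.Sigmoid(),        # squash the output to a probability-like score
)

x = torch.randn(8, 32)   # a batch of 8 examples with 32 features each
y = model(x)             # forward pass: information flows layer by layer
print(y.shape)           # torch.Size([8, 1])
```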

The role of artificial intelligence and machine learning in the health sector was already the topic of debate well before the coronavirus pandemic.6 As an excerpt from PubMed shows, several approaches to AI in medical devices have already been implemented in the past (see Figure 1). However, the number of publications on artificial intelligence and medical devices has grown exponentially since roughly 2005.

Artificial intelligence in the medtech sector is at the beginning of a growth phase. However, expectations for this technology are already high, and consequently prospects for the digital future of the medical sector are auspicious. In the future, artificial intelligence may be able to support health professionals in critical tasks, controlling and automating complex processes. This will enable diagnosis, therapy and care to be optimally aligned to patients' individual needs, thereby increasing treatment efficiency, which in turn will ensure an effective and affordable healthcare sector in the future.4

However, some AI advocates tend to overlook some of the obstacles and risks encountered when artificial intelligence is implemented in clinical practice. This is particularly true for the upcoming regulation of this innovative technology. The risks of incorporating artificial intelligence in medical devices include faulty or manipulated training data, attacks on AI such as adversarial attacks, violation of privacy and lack of trust in technology. In spite of these technology-related risks, the applicable standards and regulatory frameworks do not include any specific requirements for the use of artificial intelligence in medical devices. After years of negotiations in the European Parliament, Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices entered into force on May 25, 2017. In contrast to Directives, EU Regulations enter directly into force in the EU Member States and do not have to be transposed into national law. The new regulations impose strict demands on medical device manufacturers and the Notified Bodies, which manufacturers must involve in the certification process of medical devices and in-vitro diagnostic medical devices (excluding class I medical devices and nonsterile class A in-vitro diagnostic medical devices, for which the manufacturer's self-declaration will be sufficient).

Annex I to both the EU Regulation on medical devices (MDR) and the EU Regulation on in-vitro diagnostic medical devices (IVDR) defines general safety and performance requirements for medical devices and in-vitro diagnostics. However, these general requirements do not address the specific requirements related to artificial intelligence. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications on how to demonstrate conformity with the general requirements. To place a medical device on the European market, manufacturers must meet various criteria, including compliance with the essential requirements and completion of the appropriate conformity assessment procedure. By complying with the requirements, manufacturers ensure that their medical devices fulfill the high levels of safety and health protection required by the respective regulations.

To ensure the safety and performance of artificial intelligence in medical devices and in-vitro diagnostics, certain minimum requirements must be fulfilled. However, the above regulations define only general requirements for software. According to the general safety and performance requirements, software must be developed and manufactured in keeping with the state of the art. Factors to be taken into account include the software lifecycle process and risk management. Beyond the above, repeatability, reliability and performance in line with the intended use of the medical device must be ensured. This implicitly requires artificial intelligence to be repeatable, performant, reliable and predictable. However, this is only possible with a verified and validated model. In the absence of relevant regulatory requirements and standards, manufacturers and Notified Bodies must each determine the state of the art for developing and testing artificial intelligence in medical devices. During the development, assessment and testing of AI, fundamental differences between artificial intelligence (particularly machine learning) and conventional software algorithms become apparent.

Towards the end of 2019, and thus just weeks before the World Health Organization's (WHO) warning of an epidemic in China, a Canadian company (BlueDot) specializing in AI-based monitoring of the spread of infectious diseases alerted its customers to the same risk. To achieve this, the company's AI combed through news reports and databases of animal and plant diseases. By accessing global flight ticketing data, the AI system correctly forecast the spread of the virus in the days after it emerged. This example shows the high level of performance that can already be achieved with artificial intelligence today.7 However, it also reveals one of the fundamental problems encountered with artificial intelligence: despite the distribution of information about the outbreak to various health organizations in different countries, international responses were few. One reason for this lack of response to the AI-based warning is the lack of trust in technology that we do not understand, which plays a particularly significant role in medical applications.

In clinical applications, artificial intelligence is predominantly used for diagnostic purposes. Analysis of medical images is the area where the development of AI models is most advanced. Artificial intelligence is successfully used in radiology, oncology, ophthalmology, dermatology and other medical disciplines.2 The advantages of using artificial intelligence in medical applications include the speed of data analysis and the capability of identifying patterns invisible to the human eye.

Take the diagnosis of osteoarthritis, for example. Although medical imaging enables healthcare professionals to identify osteoarthritis, this is generally at a late stage after the disease has already caused some cartilage breakdown. Using an artificial-intelligence system, a research team led by Dr. Shinjini Kundu analyzed magnetic resonance tomography (MRT) images. The team was able to predict osteoarthritis three years before the first symptoms manifested themselves.8 However, the team members were unable to explain how the AI system arrived at its diagnosis. In other words, the system was not explainable. The question now is whether patients will undergo treatment such as surgery, based on a diagnosis made by an AI system, which no doctor can either explain or confirm.

Further investigations revealed that the AI system identified diffusion of water into cartilage. It detected a symptom invisible to the human eye and, even more important, a pattern that had previously been unknown to science. This example again underlines the importance of trust in the decision of artificial intelligence, particularly in the medtech sector. Justification of decisions is one of the cornerstones of a doctor-patient (or AI-patient) relationship based on mutual trust. However, to do so the AI system must be explainable, understandable and transparent. Patients, doctors and other users will only trust in AI systems if their decisions can be explained and understood.

Many medical device manufacturers wonder why assessment and development of artificial intelligence must follow a different approach to that of conventional software. The reason is based on the principles of how artificial intelligence is developed and how it performs. Conventional software algorithms take an input variable X, process it using a defined algorithm and supply the result Y as the output variable (if X, then Y). The algorithm is programmed, and its correct function can be verified and validated. The requirements for software development, validation and verification are described in the two standards IEC 62304 and IEC 82304-1. However, there are fundamental differences between conventional software and artificial intelligence implementing a machine learning algorithm. Machine learning is based on using data to train a model without explicitly programming the data flow line by line. As described above, machine learning is trained using an automated appraisal of existing information (training data). Given this, both the development and conformity assessment of artificial intelligence necessitate different standards. The following sections provide a brief overview of the typical pitfalls.
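
The contrast described above can be illustrated with a short, hypothetical sketch: the first function encodes an explicit rule ("if X, then Y"), while the second learns its decision boundary from labelled training data. The threshold, the toy data and the choice of scikit-learn are illustrative assumptions only.

```python
# Conventional software: the decision rule is written explicitly ("if X, then Y").
def conventional_rule(temperature_c: float) -> bool:
    return temperature_c >= 38.0  # fixed, human-authored threshold for "fever"

# Machine learning: the rule is not programmed line by line; it is inferred
# from labelled training data and can only be checked by testing the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([[36.5], [37.0], [38.2], [39.1], [36.9], [38.7]])
y_train = np.array([0, 0, 1, 1, 0, 1])  # labels provided by humans

model = LogisticRegression().fit(X_train, y_train)

print(conventional_rule(38.4))   # True, by the explicit rule
print(model.predict([[38.4]]))   # learned decision, e.g. array([1])
```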

A major disadvantage of artificial intelligence, in particular machine learning based on neural networks, is the complexity of the algorithms. This makes them highly non-transparent, hence their designation as black-box AI (see Figure 2). The complex nature of AI algorithms not only concerns their mathematical description but also, in the case of deep-learning algorithms, their high level of dimensionality and abstraction. For these classes of AI, the extent to which input information contributes to a specific decision is mostly impossible to determine. Can we trust the prediction of the AI system in such a case and, in a worst-case scenario, can we identify a failure of the system or a misdiagnosis?

A world-famous example of the result of a black-box AI was the match between AlphaGo, the artificial intelligence system made by DeepMind (Google), and the Go world champion Lee Sedol. In the match, which was watched by an audience of 60 million including experts, move 37 showed the significance of these particular characteristics of artificial intelligence. The experts described the move as a mistake, predicting that AlphaGo would lose the match since, in their opinion, the move made no sense at all. In fact, they went even further and said, "It's not a human move. I've never seen a human play this move."9

None of them understood the level of creativity behind AlphaGos move, which proved to be critical for winning the match. While understanding the decision made by the artificial intelligence system would certainly not change the outcome of the match, it still shows the significance of the explainability and transparency of artificial intelligence, particularly in the medical field. AlphaGo could also have been wrong!

One example of AI with an intended medical use was the application of artificial intelligence for determining a patient's risk of pneumonia. This example shows the risk of black-box AI in the medtech sector. The system in question surprisingly identified the high-risk patients as non-significant.10 Rich Caruana, one of the leading AI experts at Microsoft and one of the developers of the system, advised against the use of the artificial intelligence he had developed: "I said no. I said we don't understand what it does inside. I said I was afraid."11

In this context, it is important to note that open or explainable artificial intelligence, also referred to as white-box AI, is by no means inferior to black-box AI. While there are as yet no standard methods for opening the black box, there are promising approaches for ensuring the plausibility of the predictions made by AI models. Some approaches try to achieve explainability based on individual predictions on input data. Others, by contrast, try to narrow down the range of input pixels that influence the decisions of artificial intelligence.12
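
As a rough illustration of the second family of approaches, the following sketch estimates which image regions most influence a model's score by occluding patches and measuring the drop in output (occlusion sensitivity). The patch size, the baseline value and the assumption that model is any callable returning a single score are illustrative choices, not a method prescribed by the article.

```python
# Naive occlusion sensitivity: slide a grey patch over the image and record
# how much the model's score drops; large drops mark influential regions.
# "model" is assumed to be any callable that returns one score per image.
import numpy as np

def occlusion_map(model, image, patch=8, baseline=0.0):
    h, w = image.shape
    reference = model(image)                      # score on the unmodified image
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline   # hide this region
            heatmap[i // patch, j // patch] = reference - model(occluded)
    return heatmap   # higher values = region mattered more for the decision
```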

Medical devices and their manufacturers must comply with further regulatory requirements in addition to the Medical Device Regulation (MDR) and the In-vitro Diagnostics Regulation (IVDR). The EUs General Data Protection Regulation (GDPR), for instance, is of particular relevance for the explainability of artificial intelligence. It describes the rules that apply to the processing of personal data and is aimed at ensuring their protection. Article 110 of the Medical Device Regulation (MDR) explicitly requires measures to be taken to protect personal data, referencing the predecessor of the General Data Protection Regulation.

AI systems which influence decisions that might concern an individual person must comply with the requirements of Articles 13, 22 and 35 of the GDPR.

"Where personal data are collected, the controller shall provide ... the following information: ... the existence of automated decision-making ... and, at least in those cases, meaningful information about the logic involved."13

In simple terms, this means that patients who are affected by automated decision-making must be able to understand the decision and have the possibility of taking legal action against it. However, this is precisely the type of understanding that is not possible in the case of black-box AI. Is a medical product implemented as black-box AI eligible for certification as a medical device? The exact interpretation of the requirements specified in the General Data Protection Regulation is currently the subject of legal debate.14

The Medical Device Regulation places manufacturers under the obligation to ensure the safety of medical devices. Among other specifications, Annex I to the regulation includes requirements concerning the repeatability, reliability and performance of medical devices (both for stand-alone software and software embedded into a medical device):

Devices that incorporate electronic programmable systems, including software, shall be designed to ensure repeatability, reliability and performance in line with their intended use. (MDR Annex I, 17.1)15

Compliance with general safety and performance requirements can be demonstrated by utilizing harmonized standards. Adherence to a harmonized standard leads to the assumption of conformity, whereby the requirements of the regulation are deemed to be fulfilled. Manufacturers can thus validate artificial intelligence models in accordance with the ISO 13485:2016 standard, which, among other requirements, describes the processes for the validation of design and development in clause 7.3.7.

For machine learning, two independent sets of data must be considered. In the first step, one set of data is needed to train the AI model. Subsequently, another set of data is necessary to validate the model. Validation can also be performed by cross-validation, in which the pooled data are repeatedly split so that each validation portion remains independent of the data used for training in that round. In any case, it must be noted that AI models can only be validated using data that are independent of the training data. Now, which ratio is recommended for the two sets of data? This is not an easy question to answer without more detailed information about the characteristics of the AI model. The published literature (state of the art) recommends a ratio of approximately 80% training data to approximately 20% validation data. However, the ratio used depends on many factors and is not set in stone. The Notified Bodies will continue to monitor the state of the art in this area and, within the scope of conformity assessment, will also request the reasons underlying the ratio used.
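
A minimal sketch of how such a split and a cross-validation check are commonly implemented with scikit-learn is shown below; the 80/20 ratio, the synthetic data and the logistic regression model are placeholder assumptions.

```python
# Illustrative 80/20 split plus 5-fold cross-validation with scikit-learn.
# The ratio, the model and the synthetic data are placeholder assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out an independent validation set (approx. 20% of the data).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_val, y_val))

# Cross-validation: the pooled data are repeatedly re-split, so every fold's
# validation portion stays independent of the data used to train that fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())
```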

Another important question concerns the amount of data required. This issue is difficult to assess in general terms, as the required amount depends on several factors.

Generally, the more data available, the better the model can be assumed to perform. In their publication on speech recognition, Banko and Brill from Microsoft state: "After throwing more than one billion words within context at the problem, any algorithm starts to perform incredibly well."16

At the other end of the scale, i.e. the minimum number of data sets required, computational learning theory offers approaches for estimating the lower threshold. However, no general answers to this question are yet known, and these approaches are based on ideal assumptions and are only valid for simple algorithms.

Manufacturers need to look not only at the amount of data, but also at the statistical distribution of both sets of data. To prevent bias, the data used for training and validation must represent the statistical distribution of the environment in which the device will be used. Training with data that are not representative will result in bias. The U.S. healthcare system, for example, uses artificial intelligence algorithms to identify and help patients with complex health needs. However, it soon became evident that where patients had the same level of health risk, the model suggested African-American patients less often for enrolment in these special high-risk care management programs.17 Studies carried out by Obermeyer et al. showed the cause of this to be racial bias in the training data. Bias in training data not only involves ethical and moral aspects that need to be considered by manufacturers: it can also affect the safety and performance of a medical device. Bias in training data could, for example, result in certain skin conditions going undetected in patients whose skin tone is underrepresented in the training data.
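
One simple, illustrative way to spot such a mismatch is to compare subgroup proportions in the training data with the proportions expected in the target population; the group labels, sample sizes and reference shares below are invented for illustration.

```python
# Simple representativeness check: compare subgroup proportions in the
# training data with those of the intended patient population.
# Group labels, sample sizes and the reference shares are illustrative assumptions.
import numpy as np

def distribution_gap(train_groups, population_share):
    values, counts = np.unique(train_groups, return_counts=True)
    train_share = dict(zip(values, counts / counts.sum()))
    return {g: train_share.get(g, 0.0) - population_share[g]
            for g in population_share}

train_groups = np.array(["A"] * 720 + ["B"] * 80)   # skewed training sample
population_share = {"A": 0.6, "B": 0.4}             # deployment reality
gaps = distribution_gap(train_groups, population_share)
print(gaps)   # {'A': 0.3, 'B': -0.3} -> group B is strongly under-represented
```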

Many deep learning models rely on a supervised learning approach, in which AI models are trained using labelled data. In cases involving labelled data, the bottleneck is not the amount of data, but the rate and accuracy at which data are labelled. This renders labelling a critical process in model development. At the same time, data labelling is error-prone and frequently subjective, as it is mostly done by humans. Humans also tend to make mistakes in repetitive tasks (such as labelling thousands of images).

Labelling of large data volumes and selection of suitable identifiers is a time- and cost-intensive process. In many cases, only a small proportion of the data is processed manually. These data are used to train an AI system. Subsequently, the AI system is instructed to label the remaining data itself, a process that is not always error-free, which in turn means that errors will be reproduced.7 Nevertheless, the performance of artificial intelligence combined with machine learning depends very much on data quality. This is where the accepted principle of "garbage in, garbage out" becomes evident. If a model is trained using data of inferior quality, the developer will obtain a model of the same quality.

Other properties of artificial intelligence that manufacturers need to take into account are adversarial learning problems and instabilities of deep learning algorithms. Generally, most machine learning algorithms assume that training and test data are governed by identical distributions. However, this statistical assumption can be exploited by an adversary, i.e., an attacker that attempts to fool the model by providing deceptive input. Such attackers aim to destabilize the model and cause the AI to make false predictions. Introducing certain adversarial patterns into the input data, invisible to the human eye, causes the AI system to make major detection errors. In 2020, for example, the security company McAfee demonstrated that Tesla's Mobileye EyeQ3 AI system could be tricked into driving 80 km/h over the speed limit, simply by adding a 5 cm strip of black tape to a speed limit sign.24
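
One of the simplest adversarial perturbations of this kind is the fast gradient sign method (FGSM), sketched below under the assumption of a differentiable PyTorch classifier; the model, the labels and the epsilon value are illustrative placeholders, not details taken from the cited studies.

```python
# Sketch of the fast gradient sign method (FGSM): a perturbation that is tiny
# per pixel but pushes the model towards a wrong prediction.
# "model" is assumed to be a differentiable PyTorch classifier returning logits;
# epsilon controls how large (and how visible) the perturbation is.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```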

AI methods used in the reconstruction of MRT and CT images have also proved unstable in practice time and again. A study investigating six of the most common AI methods used in the reconstruction of MRT and CT images found these methods to be highly unstable. Even minor changes in the input images, invisible to the human eye, result in completely distorted reconstructed images.18 The distorted images included artifacts such as the removal of tissue structures, which might result in misdiagnosis. Such an attack may cause artificial intelligence to reconstruct a tumor at a location where there is none in reality, or even remove cancerous tissue from the real image. These artifacts are not present when manipulated images are reconstructed using conventional algorithms.18

Another vulnerability of artificial intelligence concerns image-scaling attacks, which have been known since 2019.19 Image-scaling attacks enable an attacker to manipulate the input data in such a way that machine learning models that rely on image scaling can be brought under the attacker's control. Xiao et al., for example, succeeded in manipulating the scaling routines of the well-known machine-learning library TensorFlow in such a manner that attackers could even replace complete images.19 An example of such an image-scaling attack is shown in Figure 3. In this scaling operation, the image of a cat is replaced by an image of a dog. Image-scaling attacks are particularly critical as they can both distort the training of artificial intelligence and influence the decisions of artificial intelligence trained using manipulated images.
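
The principle behind such attacks can be illustrated with a simplified sketch: a naive nearest-neighbour downscaler only samples a sparse grid of source pixels, so an attacker who changes just those pixels controls what the model sees while the full-resolution image appears almost unchanged. The image sizes and the nearest-neighbour scaler are simplifying assumptions; real attacks target the interpolation routines of actual libraries.

```python
# Why image-scaling attacks work: a naive downscaler only "looks at" a sparse
# grid of source pixels, so altering just those pixels changes the model input
# while the full-size image that a human reviews looks almost untouched.
import numpy as np

def nearest_neighbour_downscale(img, out_size):
    h, w = img.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return img[np.ix_(rows, cols)], rows, cols   # only these pixels are sampled

big = np.random.rand(512, 512)                   # image shown to a human reviewer
small, rows, cols = nearest_neighbour_downscale(big, 64)

tampered = big.copy()
tampered[np.ix_(rows, cols)] = 0.0               # change only the sampled pixels
small_t, _, _ = nearest_neighbour_downscale(tampered, 64)

print(np.abs(big - tampered).mean())   # tiny average change to the large image
print(np.abs(small - small_t).mean())  # but the model's input changed completely
```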

Adversarial attacks and stability issues pose significant threats to the safety and performance of medical devices incorporating artificial intelligence. Especially concerning is the fact that the conditions under which such attacks could occur are difficult to predict. Furthermore, the response of AI to adversarial attacks is difficult to specify. If, for instance, a conventional surgical robot is attacked, it can still rely on other sensors. However, changing the policy of the AI in a surgical robot might lead to unpredictable behavior and thereby to responses of the system that are catastrophic from a human perspective. Methods to address the above vulnerabilities and reduce susceptibility to errors do exist. For example, models can be subjected to safety training, making them more resilient to these vulnerabilities. Defense techniques such as adversarial training and defensive distillation have already been applied successfully to image reconstruction algorithms.21 Further methods include human-in-the-loop approaches, as human performance is largely robust against adversarial attacks targeting AI systems. However, this approach is only applicable in settings where humans can be directly involved.25

Although many medical devices using artificial intelligence have already been approved, the regulatory pathways in the medtech sector are still open. At present, no laws, common specifications or harmonized standards exist to regulate the application of AI in medical devices. In contrast to the EU authorities, the FDA published a discussion paper on a proposed regulatory framework for artificial intelligence in medical devices in 2019. The document is based on the principles of risk management, software-change management, guidance on the clinical evaluation of software and a best-practice approach to the software lifecycle.20 In 2021, the FDA published its action plan on furthering AI in medical devices. The action plan consists of five next steps, the foremost being to develop a regulatory framework explicitly for change control of AI, a good machine learning practice, and new methods to evaluate algorithm bias and robustness.26

In 2020 the European Union also published a position paper on the regulation of artificial intelligence and medical devices. The EU is currently working on future regulation, with a first draft expected in 2021.

China's National Medical Products Administration (NMPA) has published the "Technical Guiding Principles of Real-World Data for Clinical Evaluation of Medical Devices" guidance document. It specifies obligations concerning requirements analysis, data collection and processing, model definition, verification and validation, as well as post-market surveillance.

Japan's Ministry of Health, Labour and Welfare is working on a regional standard for artificial intelligence in medical devices. However, to date this standard is available in Japanese only. Key points of assessment are plasticity, the predictability of models, the quality of data and the degree of autonomy.27

In Germany, the Notified Bodies have developed their own guidance for artificial intelligence. The guidance document was prepared by the Interest Group of the Notified Bodies for Medical Devices in Germany (IG-NB) and is aimed at providing guidance to Notified Bodies, manufacturers and interested third parties. The guidance follows the principle that the safety of AI-based medical devices can only be achieved by means of a process-focused approach that covers all relevant processes throughout the whole life cycle of a medical device. Consequently, the guidance does not define specific requirements for products, but for processes.

The World Health Organization, too, is currently working on a guideline addressing artificial intelligence in health care.

Artificial intelligence is already used in the medtech sector, albeit currently somewhat sporadically. At the same time, the number of AI algorithms certified as medical devices has increased significantly over recent years.28 Artificial intelligence is expected to play a significant role in all stages of patient care. According to the requirements defined in the Medical Device Regulation, any medical device, including those incorporating AI, must be designed in such a way as to ensure repeatability, reliability and performance according to its intended use. In the event of a fault condition (single fault condition), the manufacturer must implement measures to minimize unacceptable risks and any reduction in the performance of the medical device (MDR Annex I, 17.1). However, this requires validation and verification of the AI model.

Many of the AI models used are black-box models. In other words, there is no transparency in how these models arrive at their decisions. This poses a problem where interpretability and trustworthiness of the systems are concerned. Without transparent and explainable AI predictions, the medical validity of a decision might be doubted. Some current errors of AI in pre-clinical applications might fuel doubts further. Explainable and approvable AI decisions are a prerequisite for the safe use of AI on actual patients. This is the only way to inspire trust and maintain it in the long term.

The General Data Protection Regulation demands a high level of protection of personal data. Its strict legal requirements also apply to processing of sensitive health data in the development or verification of artificial intelligence.

Adversarial attacks aim at influencing artificial intelligence, both during the training of the model and in the classification decision. These risks must be kept under control by taking suitable measures.

Impartiality and fairness are important, safety-relevant, moral and ethical aspects of artificial intelligence. To safeguard these aspects, experts must take steps to prevent bias when training the system.

Another important question concerns the responsibility and accountability of artificial intelligence. Medical errors made by human doctors can generally be traced back to the individuals, who can be held accountable if necessary. However, if artificial intelligence makes a mistake, the lines of responsibility become blurred. For medical devices, on the other hand, the question is straightforward. The legal manufacturer of the medical device incorporating artificial intelligence must ensure the safety and security of the medical device and assume liability for possible damage.

Regulation of artificial intelligence is likewise still at an early stage of development, with various approaches being explored. All major regulators around the globe have defined or are starting to define requirements for artificial intelligence in medical devices. A high level of safety in medical devices will only be possible with suitable measures in place to regulate and control artificial intelligence, but this must not impair the development of technical innovation.


Creative AI: Best Examples of Artificial Intelligence In Photography, Art, Writing, Music and Gaming – TechTheLead

What is creativity? If you look it up in the dictionary, it's the use of imagination or original ideas to create something. Naturally, it's always been the prerogative of humans, beings that can dream big and visualize concepts. Lately, though, researchers have been arguing that creativity is not only a characteristic of humans but of artificial intelligence, too. AIs may not daydream as we do but, in specific contexts, with key information at hand, they can write, picture, invent games, or beat others at them. These are some of the most extraordinary creative AI examples.

To an AI, creativity isn't exactly the power of imagination as much as it is the power of creation. Artificial intelligence uses past data to interpret a certain event or environment, learning from it just enough to generate new things. Most of the time, convolutional neural networks are fed immense amounts of data and left to train using those as starting points. The algorithms must see the patterns in the input information in order to then generate new examples that could be plausible in a given context.

So, can AI be creative? The answer is definitely yes, if you accept the definition of creativity proposed by science. There are dozens of examples in this sense, some more successful than others, ranging from an AI that creates poems or stories to neural networks that can come up with names or give life to old photos.

Take, for instance, this Deep Nostalgia AI that turns old photos into video animations. Imagine a reverse of iPhone Live Photos: instead of picking the best frame from a short video, the program takes a vintage portrait photo and puts it into motion.

Another way to give life to old pictures is through colorization. So, a team trained an AI to fill in the blanks in a way that makes it possible to actually see Abraham Lincoln, Albert Einstein, Frida Kahlo, or Bertrand Russell in their true colors. The results are amazing!

Truly mind-blowing is the following GAN effort! NVIDIA managed to produce new individuals, new human faces starting from the photos of real people. Just look at them and tell me if you can tell who is computer-generated and who is an actual person!

And AI hasnt stopped here. One AI fed with pictures of more than 1,000 classical sculptures managed to produce Dio, a unique sculpture. Ironically, Dio was built from the remains of the computer used to generate it.

I'm not kidding. After trying their hand at retouching, colorizing and even creating portraits from scratch, AI programs were put to work writing. What? Pickup lines, poems, love songs, and even horror stories.

In this case, getting it just right wasn't the main goal. In fact, the teams training the neural networks probably hoped for a good laugh at most.

What did they get?

Quirky ways to flirt, for one. Then, two-line poems written after a thorough study of over 20 million words of 19th-century poetry. Last, but not least, a love song that nobody should have to listen to: "Downtiration, Tender Love." It's cringey, at best.

At the opposite end of the spectrum is this love poem from a machine trained by researchers at Microsoft and professors at Kyoto University. After being trained on thousands of images juxtaposed with human-written descriptions and poems about each image, the AI wrote a decent piece that could pass as avant-garde.

The most popular AI writer by far, however, is Shelley. This time it was MIT that gave an AI the power of storytelling and stories it wrote, from random snippets based on what it learned, to contributions on a given text. It all culminated with Shelley breaking the fourth wall and inviting users on Twitter to help her write better and more.

Writing horror stories may seem a fun, sometimes easy task. But defending your intentions to humans? That's not for the faint of heart. Luckily, GPT-3, OpenAI's powerful new language generator, lacks a heart and was able to address humanity in a deeply moving essay.

Going from poetry to music is a piece of cake. So, researchers leveled up and gave AI the task to compose lyrics and even entire music albums.

One of them generated scary music in time for Halloween. The uncanny resemblance of this AI-generated playlist to horror movie soundtracks has an explanation. MIT trained the neural network on thousands of songs from cult scary movies to produce a new sound. Scary or just unnatural? Listen here and let me know!

Another AI was trained to come up with a new Swift song. To manage that, the neural network was trained on a massive amount of Tay lyrics. Unfortunately, its creation wasn't able to pass for a T-Swift song.

Taylor Swift wasn't the only singer AI tried to replace on stage. Eminem and Kanye came up next, although in their case it was more of a deepfake situation. Both artists changed lanes and started rapping about female power. Check out their collab here!

Finally, this AI went above and beyond with its music skills. It helped an artist compose and produce an entire music album. No need for a crew!

Have you heard of the God of Go? You must have. AlphaGo is the strongest Go player in history and, surprise, surprise, it's not human. DeepMind, Alphabet's subsidiary in charge of artificial intelligence development, is the creator of the most powerful AI player of Go. Its Zero rendition proved that it doesn't need to observe humans before challenging a Go player. In fact, it can play against its predecessor, already a worldwide champion, and beat it! Want to find out more about this extraordinary program? Read about it here!

Defeating humans at their own game is satisfying in itself, but coming up with a whole new game? Well, that's awe-inspiring. A neural network trained on data from over 400 existing sports created Speedgate and its logo!

For those who prefer more static hobbies, a knitting AI could come in handy. InverseKnit showed it could do a more than decent job with fingerless gloves made from a very specific type of acrylic yarn, but it would be easy to train it on more materials. In the end, the researchers would like to make InverseKnit available to the public.

Now, if you think knitter is the weirdest job an AI could have, try again. One machine learning program simulated the voice and facial features of a known Chinese TV anchor, making the case for a non-stop TV host.

In a different corner of the world, an advertising competition in Belgium had an AI judge in its panel to pick the winning campaign. Surprisingly, the AI made the same choice as the human judges, proving its worthiness.

Finally, an AI took upon itself the task of naming adoptable kittens. Sure, some of its suggestions were downright terrifying, like Warning Signs, Bones of The Master, and Kill All Humans, but the AI did manage to find some ingenious ones.

That goes to show that artificial intelligence doesn't need to be inventive to be creative. For AI, the sky is the limit, as long as humans fill in the gaps.


Democracy – The Recorder

Published: 5/7/2021 6:40:36 PM

There seems to be much gnashing of teeth about how the Charter Review Committee is destroying democracy by raising the bar for overturning City Council decisions. What actually destroys democracy is 300 people being able to hold an entire town hostage (and cost us an extra $500k+!) when a decision has been made by a democratically elected council.

Exhibit A: the anti-library petition that almost cost the town $10M in grant funds and backed up construction by most of a year.

Obviously there needs to be a way for citizens to petition their government, but it has to be a high-enough bar that a small group cant just stop our city from functioning. 1200/5% seems like a reasonable minimum, but perhaps give more than two weeks to gather the signatures.

If 300 people stopped the budget every year because of something they didn't like in it, we could quickly run into a constitutional/charter crisis.

Garth Shaneyfelt

Greenfield


Basecamp politics ban is reminder that the workplace isn’t a democracy – Business Insider

Over the last year, American workers have attempted to make their workplaces sites of social change and political discourse. Employees have fought for action, hoping the firms they work for will be agents in the fight against, among other things, systemic racism and harassment.

But recent changes at Basecamp, a workplace collaboration software company with approximately 60 employees, show why it is so hard to make American businesses respond to these problems. At the end of the day, the American workplace is not a democracy; it's an autocracy. In a democratic workplace, bosses would be accountable to the employees through a union, or because employees held a significant number of seats on the corporate board, or, among other things, because the law made it much more difficult to terminate employees. But in America, owners, managers, and bosses have the final say, and if political questions challenge their rule, or even just inconvenience them, they will be shut down.

On April 26, Basecamp cofounder and CEO Jason Fried and cofounder David Heinemeier Hansson posted a message on Fried's blog entitled "Changes at Basecamp." The post announced a suspension of employee benefits for gym memberships and farmer's market shares, but, more ominously, highlighted a new ban on political discussions at work and a dissolution of all committees.

Fried noted that discussions "related to politics, advocacy, or society at large" are "not healthy, [they haven't] served us well. And we're done with it at Basecamp." Fried added that the company could no longer dwell on past mistakes.

"Who's responsible for these changes?" Fried asked rhetorically, "David and I are. Who made the changes? David and I did." Fried and Hansson had unilaterally changed the workplace policies with a tone that could be read as hostile to disagreement. "The responsibility for negotiating use restrictions and moral quandaries returns to me and David," Fried wrote.

While the letter was vague about what had caused this policy change, a few days later, The Verge reported that the push came because what Fried construed as a political discussion really concerned a potential instance of workplace harassment.

In the last year or so, Basecamp employees had grown increasingly concerned about what was known as the "Best Names Ever" list a collection of Basecamp customer names that employees had presumably found funny. While the list included many Nordic or American names, it also included some names of apparent African and Asian descent. In the wake of the uprisings for racial equality in the last year and particularly the wave of anti-Asian violence, workers were demanding to know why this list, which both Fried and Hansson had known about since at least 2016, had festered for so long. Some employees had revived a dormant diversity, equity, and inclusion channel in order to address these and other concerns.

One employee cited the Anti-Defamation League's "pyramid of hate," suggesting that allowing this "Best Names Ever" list to exist was a dangerous precedent, and felt that Hansson and Fried should be held accountable. Hansson fired back in his own blog post saying that he thought this was an unfair argument and that this employee themself had tolerated the list. Two weeks later, on a Monday, Fried posted "Changes at Basecamp." After Friday's all-hands meeting, more than 20 employees resigned.

Much like a similar announcement by Coinbase, the uproar at Basecamp is an example of the reaction by bosses to workers' demands that workplaces address discrimination, harassment, and the political and structural factors that perpetuate racism, sexism, and xenophobia. Basecamp also demonstrates why addressing those issues in the workplace is so challenging in America. Companies are structured like an unaccountable totalitarian regime. Fried and Hansson, legally, have the power to end discussions. That is, they have the unilateral power to silence speech they don't like.

This seemed to be at odds with the fact that Basecamp, as a company, had been explicitly political in the past. They donated their office space in Chicago to a political candidate running for mayor, the owners testified about Apple's monopolistic practices, and Fried even published an article in Inc. about Basecamp's failure and attempts to address workplace diversity.

None of this surprises University of Michigan philosophy professor Elizabeth Anderson, author of the book "Private Government: How Employers Rule Our Lives (and Why We Don't Talk about It)." In her book, she argues that while Americans aspire to democracy, most American workplaces are, structurally, dictatorships. Workers have little to no say in who is in charge of them and almost no free speech protections. Bosses can hire whomever they want, determine pay, control who does what work and when, and fire employees for almost any reason.

The latter is enabled by "at will" employment provisions, which give employers freedom to terminate workers. Our legal system is such that founders like Fried and Hansson are largely unaccountable to employees, unlike the situation in many other countries, like Germany, where it is much more difficult to fire employees, and the inclusion of workers in managerial decisions is often the norm. This often takes place via "workers councils" in which a certain number of seats on corporate boards are reserved for workers.

Researchers of the role of politics in the workplace note that increasing democracy in the workplace and giving workers a say in the rules that govern their conduct trains people for democratic life in general. New York University Law Professor Cynthia Estlund says that there used to be a more robust discussion 80 years ago about what was then called "industrial democracy," and about the workplace as a "school for democracy."

Unions were growing, and they fought for worker protections that limited the bosses' ability to unilaterally fire workers and dictate the terms of work. Such protections empowered workers to speak out against unfair, discriminatory, or harassing conduct in the workplace. Today, we spend most of our time at work, so it's no wonder that many workers want their workplaces to be sites of societal and political change, or at least be a place where people can talk freely about current issues.

Estlund said that there are additional benefits to worker protections for open political discussion: The workplace is one of the few places in life in which we engage with a relatively politically diverse group of people. Coworkers are generally not people we grew up with or freely choose to associate with. They are a "bridge to the larger citizenry," Estlund said. If we hope to create a less divided country and get outside our ideological bubbles, "it's mainly in the workplace that we actually interact on a sustained basis with once-strangers."

In fact, the workplace protections against racial harassment that sprang up in the post-war period may have been violated at Basecamp, Anderson told me in an email.

"All employers are legally obligated to act against racial harassment including hostile environment harassment that need not target an identified employee," she wrote. "So the racist spreadsheet is clearly covered by already existing requirements. Instead, Basecamp really wanted to shut down criticism of Basecamp's racist working conditions, even though labor law clearly protects the right of workers to complain about working conditions, even if they are not organized into a union."

It's unclear why exactly employers prefer this top-down arrangement that is so opposed to the values of American life, though for Fried and Hansson the benefit is clear: they alone can end a discussion that implicates their conduct. Instead of engaging with what they found to be a bad argument and finding a path forward, they shut down the discussion completely. And the result was catastrophic for the company, not only for how it looks, but because it lost more than a third of its employees, suggesting that leaning on authoritarian tactics is detrimental to retention.

To make the workplace more democratic, we could, among other things, strengthen laws and norms protecting employment, make cooperative ownership easier, dramatically bolster unionization and collective bargaining, and give workers a say in managerial decisions. But until then, firms' prerogatives will only reflect a minority of opinions (a minority that skews heavily white and male) and workers' voices will continue to be silenced.

And as calls by workers increase for their firms to be agents of social change, for businesses to take stances on systemic racism and the climate emergency, and to make the workplace free of harassment, Basecamp demonstrates why that is so difficult in America. Without any legal accountability or widespread union representation, change will only happen at the whim of owners and managers.

Robin Kaiser-Schatzlein writes about economic life in America.
