Media Search:



Amazon's Werner Vogels: Enterprises are more daring than you might think – Protocol

When AWS unveiled Lambda in 2014, Werner Vogels thought the serverless compute service would be the domain of young, more tech-savvy businesses.

But it was enterprises that flocked to serverless first, Amazon's longtime chief technology officer told Protocol in an interview last week.

"For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you," Vogels said. "And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me: this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly, in the area of serverless, that was not the case."

AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.

Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS' new machine-learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers' lives; and his thoughts on AWS providing customers with primitives versus higher-level managed services.

This interview has been edited and condensed for clarity.

So what's the state of the state on AWS Lambda and how it's helping customers, and are there any new features that we can expect?

You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe code to Lambda. [iRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile, all the way up to monitoring Snowcones running on the International Space Station, where they run Lambda on that device as well and actually can do processing of imagery and things like that. It's become quite pervasive in that sense.

Now, the one thing is, of course, if you have existing code and you want to move over to the cloud, moving over to a virtual machine is easy: it's all in the same environment that you had on-premises. If you want to decompose the application that you had but don't want to do too many code changes, containers are probably a better target for that.

But for quite a few of our customers that really want to start from scratch, and really innovate and really think about [what] event-driven architectures look like, serverless quickly becomes the default target for them. Mostly also because it's not only that we see a significant reduction in cost for our customers, but also a significant reduction in their carbon footprint, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and energy usage.
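
To make the pay-per-invocation, event-driven model concrete: below is a minimal sketch of a Python Lambda handler for an S3 upload event, following the standard shape AWS Lambda expects for Python functions; the processing step and names are hypothetical.

    import json

    def handler(event, context):
        """Runs only when an event arrives; you pay only for execution time."""
        # For an S3-triggered function, each record describes one uploaded object.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"processing s3://{bucket}/{key}")  # hypothetical work step
        return {"statusCode": 200, "body": json.dumps("done")}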

But I'm always a bit ambivalent about the word serverless, mostly because many people associate it with when we launched Lambda. But in essence, the first service that we launched, S3, is also really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover, all those kinds of things, and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about sort of provisioning them, managing them. That's all done for you.

Can you talk about the value of CodeWhisperer, and what do you think is the next big thing for, or the future of, low-code/no-code?

For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines, and that is sort of augmenting professionals by helping them, taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both already machine-learning services that help customers with, on one hand, operations, and, on the other, doing the early security checks during the development process.

CodeWhisperer takes that even a step further. If you look at how our developers develop, there's quite a few mundane tasks where you will go search on the web for a piece of code: how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, you may do a translation on that.

There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by using machine learning to look at the complete base of, on one hand, the AWS code and the Amazon code, and on the other, all the open-source code that is out there, and then do a qualitative test on that, and then include it into this body of work where we can easily help customers by just writing some plain text and then saying, "I want a [single sign-on] log-on here," and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
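
As an illustration of that comment-to-code workflow (a hypothetical sketch, not actual CodeWhisperer output): a developer writes only the first comment line, and an assistant proposes a body like the function below, here using boto3's presigned-URL call.

    # Generate a presigned URL so a user can download a private S3 object.
    import boto3

    def presigned_download_url(bucket: str, key: str, expires: int = 3600) -> str:
        s3 = boto3.client("s3")
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=expires,
        )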

When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us to get on that path because you can focus on what's the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.

People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.

If I think about software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.

In your tech predictions for 2022, you said this is the year when "artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers." Can you expand on that, and on how AWS is helping?

When you think about CodeWhisperer and CodeGuru and DevOps Guru, or Copilot from GitHub, this is just the beginning of seeing the application of machine learning to augment humans. Whether it's a radiologist somewhere who is late at night looking at imagery and gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.

I was in Germany not that long ago, and the government there told me that they have 80,000 open IT positions. With all the scarcity of labor in the world, anything we can do to make the lives of developers easier so that they're more productive, and to make it easier for people who do not have a four-year computer science degree to get started in the IT world, will benefit all the enterprises in the world.

What's another developer problem that you're trying to solve, or what are developers asking AWS for?

If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information coming from 10 or 20 different sides: there's log files, there's metrics, there's dashboards. The work is actually tying that information together, analyzing the massive amounts of log files being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems, and then giving context around it, because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action, looking for what [the] root cause of particular problems is. So we're looking at all of the different aspects of development and operations to see what kind of things we can build to help customers there.
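
Those surfaced insights are also available programmatically. A minimal sketch, assuming the boto3 SDK's devops-guru client (parameter shapes follow the AWS SDK, but treat the details as an assumption), that lists ongoing reactive insights for triage:

    import boto3

    # Amazon DevOps Guru client from the AWS SDK for Python (boto3).
    guru = boto3.client("devops-guru")

    # Pull the insights that are still ongoing so an operator can start triage.
    response = guru.list_insights(StatusFilter={"Ongoing": {"Type": "REACTIVE"}})
    for insight in response.get("ReactiveInsights", []):
        print(insight["Name"], insight["Severity"])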

At AWS re:Invent last year, you put up a slide that read "primitives, not frameworks," and you said AWS gives customers primitives, or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering these sorts of larger, chunkier blocks such as managed services where customers don't have to do the heavy lifting, and AWS also seems to be selling more of them as well.

Let me clarify that. It mostly has to do also with sort of the speed of innovation of AWS.

Last year, we launched more than 3,000 features and services. And so why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS. The way software companies at that moment were providing infrastructure or platforms was basically that they would give developers everything [but] the kitchen sink on Day One. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old, that is looking five years back.

Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.

We knew that if cloud would really be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.

Now, quite a few, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, you have to use Glue [a serverless data integration service], you have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.
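
Composing those primitives yourself is mostly SDK plumbing. A minimal sketch of the ad hoc analytics step, assuming boto3, a Glue-cataloged database, and hypothetical table and bucket names: kicking off an Athena query over data sitting in S3.

    import boto3

    athena = boto3.client("athena")

    # Hypothetical database/table; a Glue crawler would have cataloged them.
    query = athena.start_query_execution(
        QueryString="SELECT device_id, COUNT(*) AS n FROM events GROUP BY device_id",
        QueryExecutionContext={"Database": "my_data_lake"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print("started query:", query["QueryExecutionId"])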

But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a service, or solution, called Lake Formation. That automatically pulls all these things together and gives them this higher-level component.

Now, we do also deliver these higher-level solutions. For example, some customers just want a backup solution, and they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, it is moved into cold storage. They don't want to think about that; they just want a backup solution. And so for that, we provide them a backup service. So we do have these higher-level, more managed-style services, but they're all still based on the primitives that sit underneath. So whether you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers need to worry less about which components fit together, we still provide the underlying components to developers as well.
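
That tiering flow maps closely onto S3 lifecycle rules. A minimal sketch with boto3 and a hypothetical bucket name; note that lifecycle rules transition objects by age, while the access-based behavior Vogels describes is what the S3 Intelligent-Tiering storage class automates.

    import boto3

    s3 = boto3.client("s3")

    # Move objects to Glacier 14 days after creation (age-based transition).
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "cold-after-two-weeks",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )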

Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?

There is a back-and-forth there. If I look at some of the newer developments, it's clearly research oriented. The reason for us to provide Braket, which is our quantum compute service, is that customers generally start experimenting with the different types of hardware that are out there. And there's typical usage there. It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they would transform their algorithms into things that could run on a quantum machine.
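
Experimentation with Braket typically starts small. A minimal sketch, assuming the amazon-braket-sdk Python package, that runs a two-qubit Bell-pair circuit on Braket's local simulator before targeting real quantum hardware:

    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    # Bell pair: Hadamard on qubit 0, then a CNOT entangling qubits 0 and 1.
    bell = Circuit().h(0).cnot(0, 1)

    device = LocalSimulator()
    result = device.run(bell, shots=1000).result()
    print(result.measurement_counts)  # expect roughly half '00' and half '11'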

Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation: for traditional development, that ecosystem is huge, and you get great support.

In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will not only go into hardware, but also how to provide better software support around it, such that development for these types of machines becomes easier or even goes at the same level as traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and companies that are very interested in the high-performance compute are experimenting with quantum because that could accelerate their algorithms, maybe by orders of magnitude. But, we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.

FM holds talks with US NSF chief, discusses collaboration in science and technology – Devdiscourse

Finance Minister Nirmala Sitharaman on Sunday met Director of the US National Science Foundation (NSF) Sethuraman Panchanathan and discussed fostering ties in domains such as artificial intelligence, space, agriculture and health. The two sides discussed areas of collaboration related to science and technology (S&T) which emerged during the meeting between Prime Minister Narendra Modi and US President Joe Biden during the Quad Summit in Tokyo in May, the finance ministry said in a series of tweets. "Both sides emphasised to further enhance and strengthen the time-tested, democratic & value-based mutual partnership in specific domains such as artificial intelligence, data science, quantum computing, space, agriculture and health," it added. Panchanathan indicated that many projects will be launched soon in association with the Department of Science and Technology under six technology innovation hubs. "While talking about the mission and objectives of @NSF, @DrPanch elaborated on achieving innovation at speed and scale with inclusion and solution-based approach in research," the ministry tweeted. "Union Finance Minister Smt. @nsitharaman talked about India's achievement in fostering innovation through #AtalInnovationMission, #Start-upIndia, #StandUpIndia, reforms in patent processes and advancement of appropriate technology in agriculture," it added.

CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 – HPCwire

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world's largest datacenters and fastest supercomputers are built.

The CXL Consortium on Tuesday announced a new specification called CXL 3.0, also known as Compute Express Link 3.0, that eliminates more chokepoints that slow down computation in enterprise computing and datacenters.

The new spec provides a communication link between chips, memory and storage in systems, and it is two times faster than its predecessor, CXL 2.0.

CXL 3.0 also has improvements for more fine-grained pooling and sharing of computing resources for applications such as artificial intelligence.

"CXL 3.0 is all about improving bandwidth and capacity, and can better provision and manage computing, memory and storage resources," said Kurt Lender, the co-chair of the CXL marketing work group (and senior ecosystem manager at Intel), in an interview with HPCwire.

Hardware and cloud providers are coalescing around CXL, which has steamrolled other competing interconnects. This week, OpenCAPI, an IBM-backed interconnect standard, merged with the CXL Consortium, following in the footsteps of Gen-Z, which did the same in 2020.

The consortium released the first CXL 1.0 specification in 2019 and quickly followed it up with CXL 2.0, which supported PCIe 5.0, the interconnect found in a handful of chips such as Intel's Sapphire Rapids and Nvidia's Hopper GPU.

The CXL 3.0 spec is based on PCIe 6.0, which was finalized in January, and has a data transfer speed of up to 64 gigatransfers per second, the same as PCIe 6.0.
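
Those transfer rates convert to raw link bandwidth with simple arithmetic. The sketch below computes the pre-overhead, per-direction figure for a 16-lane link; real delivered throughput is lower once FLIT and protocol overhead are accounted for.

    # Raw, pre-overhead bandwidth of an x16 link at a given transfer rate.
    def raw_x16_bandwidth_gb_s(gigatransfers_per_s: float, lanes: int = 16) -> float:
        gigabits_per_s = gigatransfers_per_s * lanes  # one bit per transfer per lane
        return gigabits_per_s / 8                     # gigabytes/s, per direction

    print(raw_x16_bandwidth_gb_s(64))   # PCIe 6.0 / CXL 3.0: 128.0 GB/s
    print(raw_x16_bandwidth_gb_s(128))  # planned PCIe 7.0:   256.0 GB/s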

The CXL interconnect can link up chips, storage and memory that are near and far from each other, and that allows system providers to build datacenters as one giant system, said Nathan Brookwood, principal analyst at Insight 64.

CXL's ability to support the expansion of memory, storage and processing in a disaggregated infrastructure gives the protocol a step up over rival standards, Brookwood said.

Datacenter infrastructures are moving to a decoupled structure to meet the growing processing and bandwidth needs for AI and graphics applications, which require large pools of memory and storage. AI and scientific computing systems also require processors beyond just CPUs, and organizations are installing AI boxes, and in some cases, quantum computers, for more horsepower.

CXL 3.0 improves bandwidth and capacity with better switching and fabric technologies, the CXL Consortium's Lender said.

"CXL 1.1 was sort of in the node, then with 2.0, you can expand a little bit more into the datacenter. And now you can actually go across racks; you can do decomposable or composable systems, with the fabric technology that we've brought with CXL 3.0," Lender said.

At the rack level, one can make CPU or memory drawers as separate systems, and improvements in CXL 3.0 provide more flexibility and options in switching resources compared to previous CXL specifications.

Typically, servers have a CPU, memory and I/O, and can be limited in physical expansion. In disaggregated infrastructure, one can take a cable to a separate memory tray through a CXL protocol without relying on the popular DDR bus.

"You can decompose or compose your datacenter as you like it. You have the capability of moving resources from one node to another, and don't have to do as much overprovisioning as we do today, especially with memory," Lender said, adding it's a matter of "you can grow systems and sort of interconnect them now through this fabric and through CXL."

The CXL 3.0 protocol uses the electricals of the PCI-Express 6.0 protocol, along with its protocols for I/O and memory. Some improvements include support for new processors and endpoints that can take advantage of the new bandwidth. CXL 2.0 had single-level switching, while 3.0 has multi-level switching, which enables larger fabrics at the cost of added latency.

"You can actually start looking at memory like storage: you could have hot memory and cold memory, and so on. You can have different tiering, and applications can take advantage of that," Lender said.

The protocol also accounts for the ever-changing infrastructure of datacenters, providing more flexibility in how system administrators want to aggregate and disaggregate processing units, memory and storage. The new protocol opens more channels and resources for new types of chips, including SmartNICs, FPGAs and IPUs, that may require access to more memory and storage resources in datacenters.

"HPC composable systems: you're not bound by a box. HPC loves clusters today. And [with CXL 3.0] now you can do coherent clusters and low latency. The growth and flexibility of those nodes is expanding rapidly," Lender said.

The CXL 3.0 protocol can support up to 4,096 nodes, and has a new concept of memory sharing between different nodes. That is an improvement from a static setup in older CXL protocols, where memory could be sliced and attached to different hosts, but could not be shared once allocated.

"Now we have sharing, where multiple hosts can actually share a segment of memory. Now you can actually look at quick, efficient data movement between hosts if necessary, or if you have an AI-type application that you want to hand data from one CPU or one host to another," Lender said.
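
The article quotes no programming interface for this, but the allocation-versus-sharing distinction is easy to model. The toy sketch below is purely illustrative (hypothetical class and method names, not a real CXL or Fabric Manager API): an "allocated" segment belongs to one host, as in CXL 2.0-style pooling, while a "shared" segment is visible to several, as CXL 3.0 allows.

    # Toy model of fabric-managed memory. Illustration only: hypothetical names,
    # not a real CXL or Fabric Manager API.
    class FabricManager:
        def __init__(self) -> None:
            self.segments: dict[str, set[str]] = {}  # segment id -> hosts with access

        def allocate(self, segment: str, host: str) -> None:
            """CXL 2.0-style pooling: carve a segment out for exactly one host."""
            if segment in self.segments:
                raise ValueError(f"segment {segment!r} is already assigned")
            self.segments[segment] = {host}

        def share(self, segment: str, hosts: list[str]) -> None:
            """CXL 3.0-style sharing: the same segment is visible to several hosts."""
            self.segments[segment] = set(hosts)

    fm = FabricManager()
    fm.allocate("seg0", "hostA")          # exclusive: only hostA sees seg0
    fm.share("seg1", ["hostA", "hostB"])  # shared: hand data between hosts via seg1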

The new feature allows peer-to-peer connection between nodes and endpoints in a single domain. That sets up a wall in which traffic can be isolated to move only between nodes connected to each other. That allows for faster accelerator-to-accelerator or device-to-device data transfer, which is key in building out a coherent system.

"If you think about some of the applications and then some of the GPUs and different accelerators, they want to pass information quickly, and now they have to go through the CPU. With CXL 3.0, they don't have to go through the CPU this way, but the CPU is coherent, aware of what's going on," Lender said.

The pooling and allocation of memory resources is managed by software called the Fabric Manager. The software can sit anywhere in the system or hosts to control and allocate memory, but it could ultimately impact software developers.

"If you get to the tiering level, and when you start getting all the different latencies in the switching, that's where there will have to be some application awareness and tuning of applications. I think we certainly have that capability today," Lender said.

It could be two to four years before companies start releasing CXL 3.0 products, and the CPUs will need to be aware of CXL 3.0, Lender said. Intel built in support for CXL 1.1 in its Sapphire Rapids chip, which is expected to start shipping in volume later this year. The CXL 3.0 protocol is backward compatible with the older versions of the interconnect standard.

CXL products based on earlier protocols are slowly trickling into the market. SK Hynix this week introduced its first DDR5 DRAM-based CXL memory samples, and will start manufacturing CXL memory modules in volume next year. Samsung also introduced CXL DRAM earlier this year.

While products based on CXL 1.1 and 2.0 protocols are on a two-to-three-year product release cycle, CXL 3.0 products could take a little longer as it takes on a more complex computing environment.

"CXL 3.0 could actually be a little slower because of some of the Fabric Manager, the software work. They're not simple systems; when you start getting into fabrics, people are going to want to do proof of concepts and prove out the technology first. It's going to probably be a three-to-four-year timeframe," Lender said.

Some companies already started work on CXL 3.0 verification IP six to nine months ago, and are fine-tuning the tools to the final specification, Lender said.

The CXL Consortium has a board meeting in October to discuss next steps, which could also involve CXL 4.0. The standards organization for PCIe, the PCI Special Interest Group, last month announced it was planning PCIe 7.0, which doubles the data transfer speed of PCIe 6.0 to 128 gigatransfers per second.

Lender was cautious about how PCIe 7.0 could potentially fit into a next-generation CXL 4.0. CXL has its own set of I/O, memory and cache protocols.

"CXL sits on the electricals of PCIe, so I can't commit or absolutely guarantee that [CXL 4.0] will run on 7.0. But that's the intent: to use the electricals," Lender said.

Under that case, "one of the tenets of CXL 4.0 will be to double the bandwidth by going to PCIe 7.0, but beyond that, everything else will be what we do: more fabric, or different tunings," Lender said.

CXL has been on an accelerated pace, with three specification releases since its formation in 2019. There was confusion in the industry about the best high-speed, coherent I/O bus, but the focus has now coalesced around CXL.

"Now we have the fabric. There are pieces of Gen-Z and OpenCAPI that aren't even in CXL 3.0, so will we incorporate those? Sure, we'll look at doing that kind of work moving forward," Lender said.

Palestinian Islamic Jihad's rocket barrages on Israel trace back to 'Iran's regional tentacles,' experts say – Fox News

The world's leading state sponsor of terrorism, the Islamic Republic of Iran, is behind Palestinian Islamic Jihad's firing of over 600 rockets at communities in Israel over the weekend, according to top military experts.

Brig. Gen. (res.) Amir Avivi, a former deputy commander of the Israel Defense Forces' Gaza Division, told Fox News Digital that there is "more than a slight possibility that Iran ordered [PIJ attacks] or it was supported by Iran."

Israel launched a preemptive mission, Operation Breaking Dawn, on Friday to stop "PIJ from firing anti-tank missiles from northern Gaza into Israel," said Avivi.

Avivi stressed the importance of connecting the dots between Iran's regime and its proxies in the region. "It was no accident that the PIJ [attacks] happened as the secretary-general of PIJ, Ziyad al-Nakhalah, was meeting with [the Islamic Republic's President Ebrahim] Raisi in Iran."

Raisi, who was sanctioned by the Trump administration for his role in the massacre of Iranian dissidents and protestors, said regarding the current violence that Israel has "once again showed its occupying and aggressive nature to the world."

Maj. Gen. Hossein Salami, the head of Iran's Islamic Revolutionary Guard Corps, was quoted by the Sepah News website as saying on Saturday: "Today, all the anti-Zionist jihadi capabilities are on the scene in a united formation working to liberate Jerusalem and uphold the rights of the Palestinian people."

The Trump administration designated the IRGC a foreign terrorist organization. The IRGC and its militias in the Middle East are responsible for the murder of over 600 American service personnel.

President Donald Trump speaks during a press briefing with the coronavirus task force, in the Brady press briefing room at the White House, Monday, March 16, 2020, in Washington. (AP Photo/Evan Vucci)

Avivi said that he sees a connection among Iran's aggressive posture at the nuclear talks in Vienna; the hostile statements of the pro-Iran regime leader of Hezbollah, Hassan Nasrallah, against a Lebanon-Israel maritime deal; and the PIJ jingoism.

"This is an indication of Irans regional tentacles and how dangerous they are and how they can destabilize the area. Israel needs to remain strong and resolute to deter Irans advance," Avivi said.

Both Republican and Democratic administrations have classified Iran's regime as the world's worst state sponsor of terrorism.

"As the nuclear negotiations were going on in Vienna, PIJs leader was meeting with the Iranian president, the head of the IRGC and others in Tehran, taking orders for terrorist attacks against Israel," Col. (ret.) Richard Kemp, who commanded the British troops in Afghanistan, told Fox News Digital. "PIJ is an Iranian proxy and is funded by Iran to the tune of hundreds of millions of dollars. Instead of discussing sanctions relief and normalization with Iran, the West should be doing everything it can to cut off the cash flow to all its terrorist proxies."

If, as the Biden White House desires, a renewed nuclear deal is reached, the administration is slated, according to media reports, to deliver over $100 billion to the Islamic Republic as part of sanctions relief in exchange for temporary restrictions on Iran's nuclear program.

This image taken from video footage aired by Iranian state television on Tuesday, March 8, 2022, shows the launch of a rocket by Iran's Revolutionary Guard carrying a Noor-2 satellite in northeastern Shahroud Desert, Iran. (Iranian state television via AP)

The theocratic state declared last week that it can develop atomic weapons.

Kemp continued, "The scandal of the negotiations with Iran is summed up by the fact that the representative of Russia, which is assaulting Ukraine while threatening the world with nuclear attack, and the representative of Iran, whose proxy is viciously attacking Israel, have been meeting with British, EU and other world powers, with American endorsement, as though none of this is happening. Such immoral behavior might be excusable if these negotiations could lead to prevention of Iran's nuclear capability. But even in the best case they can only achieve the opposite: paving the way to Iran becoming a legitimate nuclear armed state while also enabling Iranian terrorist aggression across the region."

"The decision making of the PIJ changed and deviated from the rules of engagement," Avivi, who serves as the CEO of the Israel Defense and Security Forum, said. "The IDF did not really do anything different. The IDF has [long] been conducting arrests [of terrorists] in Judea of Samaria."

Based on IDF intelligence, the Islamic Jihad deviated from the standard rules of engagement because it was set to fire anti-tank missiles into Israel. "This was the first time that PIJ planned to take action [in reaction] to something that happened in Judea and Samaria," a reference to Israel's arrest of the head of Islamic Jihad in the West Bank last Monday.

The former British commander, Kemp, said Israel's attack to knock out the PIJ leadership is justified.

"Israel had no choice other than to launch a preemptive strike against PIJ to prevent an imminent lethal attack on Israeli civilians. Their initial military operation and subsequent strikes to stop terrorist rocket fire were lawful, necessary and proportionate. Israel operates within international law at all times, while every rocket launch by PIJ is a double war crime, attacking from behind human shields and firing indiscriminately at civilians," Kemp said.

In this photo released by an official website of the office of the Iranian supreme leader, Supreme Leader Ayatollah Ali Khamenei attends a meeting with Iranian officials, participants of the 31st International Islamic Unity Conference and ambassadors from Islamic countries, in Tehran, Iran, Wednesday, Dec. 6, 2017. (Office of the Iranian Supreme Leader via AP)

The IDF on Saturday tweeted video footage of PIJ misfiring missiles into a Palestinian civilian area in the Gaza Strip. "Watch this failed rocket launch which killed children in Gaza," the IDF wrote. "This barrage of rockets was fired by the Islamic Jihad terrorist organization in Gaza last night. The rocket in the red circle misfired, killing Palestinian civilians, including children, in Jabaliya in northern Gaza."

The Gaza Strip is controlled by Hamas, which the U.S. and the European Union have designated a terrorist organization. Similarly, the U.S. and EU classify PIJ as a terrorist entity. Iran's chief strategic partner in Lebanon, Hezbollah, is also a U.S.-designated terrorist organization.

PIJ rockets on Sunday caused warning sirens to sound for the first time in the current round of violence in the capital of Israel, Jerusalem.

Israel's government said 450-470 rockets fired by PIJ entered Israel, and 120 failed in their mission and crashed into the Gaza Strip.

Israel's sophisticated Iron Dome missile defense system intercepted 97% of the PIJ rockets heading toward civilian population centers.

According to the Eshkol Regional Council, a PIJ missile struck a home in a community under its jurisdiction near the southern Gaza Strip. The family was in a bomb shelter at the time.

Benjamin Weinthal reports on Middle East affairs. You can follow Benjamin Weinthal on Twitter @BenWeinthal.

Negotiators optimistic about progress on Iran nuclear deal – ABC News

VIENNA -- Top negotiators in renewed talks to revive the 2015 Iran nuclear deal indicated Sunday that they are optimistic about the possibility of reaching an agreement to impose limits on Tehran's uranium enrichment.

"We stand 5 minutes or 5 seconds from the finish line," Russian Ambassador Mikhail Ulyanov told reporters outside Vienna's Palais Coburg, four days into the talks. He said there are "3 or 4 issues" left to be resolved.

"They are sensitive, especially for Iranians and Americans," Ulyanov said. "I cannot guarantee, but the impression is that we are moving in the right direction."

Enrique Mora, the European Union's top negotiator, also said he is "absolutely optimistic" about the talks' progress so far.

"We are advancing, and I expect we will close the negotiations soon," he told Iranian media.

Negotiators from Iran, the U.S. and the European Union resumed indirect talks over Tehran's tattered nuclear deal Thursday after a months-long standstill in negotiations.

Since the deal's de facto collapse, Iran has been running advanced centrifuges and rapidly growing its stockpile of enriched uranium.

Iran struck the nuclear deal in 2015 with the U.S., France, Germany, Britain, Russia and China. The deal saw Iran agree to limit its enrichment of uranium under the watch of U.N. inspectors in exchange for the lifting of economic sanctions.

Then-U.S. President Donald Trump unilaterally pulled the U.S. out of the accord in 2018, saying he would negotiate a stronger deal, but that didn't happen. Iran began breaking the deal's terms a year later.
