Vlad Tverdohleb - My Press Releases

Top AI Experts Vow They Won’t Help Create Lethal Autonomous Weapons

Published on 7/19/2018

AI FOR GOOD. Artificial intelligence (AI) has the potential to save lives by predicting natural disasters, stopping human trafficking, and diagnosing deadly diseases. Unfortunately, it also has the potential to take lives. Efforts to design lethal autonomous weapons — weapons that use AI to decide on their own whether or not to attempt to kill a person — are already underway.

On Wednesday, the Future of Life Institute (FLI) — an organization focused on the use of tech for the betterment of humanity — released a pledge decrying the development of lethal autonomous weapons and calling on governments to prevent it.

“AI has huge potential to help the world — if we stigmatize and prevent its abuse,” said FLI President Max Tegmark in a press release. “AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

JUST SIGN HERE. One hundred and seventy organizations and 2,464 individuals signed the pledge, committing to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” Signatories of the pledge include OpenAI founder Elon Musk, Skype founder Jaan Tallinn, and leading AI researcher Stuart Russell.

The three co-founders of Google DeepMind (Demis Hassabis, Shane Legg, and Mustafa Suleyman) also signed the pledge. DeepMind is Google’s top AI research team, and the company recently found itself in the crosshairs of the lethal autonomous weapons controversy for its work with the U.S. Department of Defense.

In June, Google vowed it would not renew that DoD contract, and later, it released new guidelines for its AI development, including a ban on building autonomous weapons. Signing the FLI pledge could further confirm the company’s revised public stance on lethal autonomous weapons.

ALL TALK? It’s not yet clear whether the pledge will actually lead to any definitive action. Twenty-six members of the United Nations have already endorsed a global ban on lethal autonomous weapons, but several world powers, including Russia, the United Kingdom, and the United States, have yet to get on board.

This also isn’t the first time AI experts have come together to sign a pledge against the development of autonomous weapons. However, this pledge does feature more signatories, and some of those new additions are pretty big names in the AI space (see: DeepMind).


Copyright 2016 IBOsocial - Part of the IBOtoolbox family of sites.