
Assassination: Boston brakes car crash, heart attack, cancer
TonyGosling (Editor)
Joined: 25 Jul 2005 | Posts: 15983 | Location: St. Pauls, Bristol, England
Posted: Fri Aug 25, 2017 12:17 am

'Self-driving' lorries to be tested on UK roads
http://www.bbc.co.uk/news/technology-41038220
Small convoys of partially driverless lorries will be tried out on major British roads by the end of next year, the government has announced.
A contract has been awarded to the Transport Research Laboratory (TRL) to carry out the tests of vehicle "platoons".
Up to three lorries will travel in formation, with acceleration and braking controlled by the lead vehicle.
But the head of the AA said platoons raised safety concerns.
The TRL will begin trials of the technology on test tracks, but these trials are expected to move to major roads by the end of 2018.
The lead vehicle in the platoons will be controlled by a human driver and humans will also control the steering in lorries to the rear - though acceleration and braking will be mirrored.
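Purely to illustrate the control arrangement described above (the lead lorry's acceleration and braking mirrored by the followers, steering left to each human driver), here is a minimal Python sketch. It is not TRL's, DAF's or Ricardo's software; the class names, the 8 m minimum gap and the fallback braking value are all invented for the example.

Code:
from dataclasses import dataclass

@dataclass
class LeadCommand:
    """Acceleration demand broadcast by the lead lorry (m/s^2, negative = braking)."""
    accel: float
    timestamp: float

class FollowerController:
    """Mirrors the lead vehicle's longitudinal commands; steering stays with the human driver."""
    def __init__(self, min_gap_m: float = 8.0):
        self.min_gap_m = min_gap_m

    def longitudinal_command(self, lead: LeadCommand, gap_m: float) -> float:
        # Copy the lead vehicle's acceleration/braking demand...
        demand = lead.accel
        # ...but brake at least moderately if the gap to the vehicle ahead closes too far.
        if gap_m < self.min_gap_m:
            demand = min(demand, -2.0)
        return demand

if __name__ == "__main__":
    ctrl = FollowerController()
    print(ctrl.longitudinal_command(LeadCommand(accel=-1.5, timestamp=0.0), gap_m=12.0))  # mirrors -1.5
    print(ctrl.longitudinal_command(LeadCommand(accel=0.5, timestamp=0.1), gap_m=5.0))    # overrides to -2.0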
The government has been promising such a project since at least 2014.
Last year, for example, it announced its intention to carry out platooning trials but was later frustrated after some European lorrymakers declined to participate.
A Department for Transport spokesman told the BBC that the experiments were now expected to go ahead, as the contract had been awarded.
The TRL has announced its partners for the project:
DAF Trucks, a Dutch lorry manufacturer
Ricardo, a British smart tech transport firm
DHL, a German logistics company
Platooning has been tested in a number of countries around the world, including the US, Germany and Japan.
However, British roads present a unique challenge, said Edmund King, president of the AA.
"We all want to promote fuel efficiency and reduce congestion but we are not yet convinced that lorry platooning on UK motorways is the way to go about it," he said, pointing out, for example, that small convoys of lorries can block road signs from the view of other road users.
"We have some of the busiest motorways in Europe with many more exits and entries.
"Platooning may work on the miles of deserted freeways in Arizona or Nevada but this is not America," he added.
Transport Minister Paul Maynard said platooning could lead to cheaper fuel bills, lower emissions and less congestion.
"But first we must make sure the technology is safe and works well on our roads, and that's why we are investing in these trials," he said.

_________________
www.lawyerscommitteefor9-11inquiry.org
www.rethink911.org
www.patriotsquestion911.com
www.actorsandartistsfor911truth.org
www.mediafor911truth.org
www.pilotsfor911truth.org
www.mp911truth.org
www.ae911truth.org
www.rl911truth.org
www.stj911.org
www.v911t.org
www.thisweek.org.uk
www.abolishwar.org.uk
www.elementary.org.uk
www.radio4all.net/index.php/contributor/2149
http://utangente.free.fr/2003/media2003.pdf
"The maintenance of secrets acts like a psychic poison which alienates the possessor from the community" Carl Jung
https://37.220.108.147/members/www.bilderberg.org/phpBB2/
TonyGosling (Editor)
Joined: 25 Jul 2005 | Posts: 15983 | Location: St. Pauls, Bristol, England
Posted: Mon Aug 28, 2017 7:32 am

And guess what? Yes. It's an (ex?) Nazi company!

Here’s how Bosch teaches cars to see using artificial intelligence
Ronan Glon | Digital Trends, 27 August 2017
https://uk.news.yahoo.com/bosch-teaches-cars-see-using-001515490.html

A car needs to be able to see its environment before it can drive itself. Don’t let pareidolia fool you; its headlights aren’t eyes. Its sensors are made out of metal, glass, and plastic parts, and they rely on an enormous amount of computing power. Without them, the car’s brain isn’t able to make the right decision at the right time.

AI by Bosch is the brains behind many self-driving car platforms, and to see how it works — and how a car sees the world around it — the company gave us a chance to briefly explore the German countryside in one of its prototypes. It turns out Andy Warhol and the future of mobility have more in common than you might think.
Virtual 20/20 vision

The prototype we’re riding shotgun in looks like a garden-variety BMW 3 Series station wagon when you see it from the outside. There are tens of thousands of them on German roads, so what makes this one special? After settling into the leather-upholstered passenger seat we notice it’s decked out with cameras, sensors, and radars that are attached to the windshield, though we’re told only the monocular camera is turned on during our trip. There is also an additional panel on the center console with various input ports, and a tablet mounted on the dashboard.

The technology is relatively simple – at least on paper. The windshield-mounted camera records footage and sends it to a PC stuffed in the Bimmer’s trunk. The information goes through a graphics processing unit (GPU) manufactured by Nvidia before traveling to the car’s on-board brain. The tablet on the dashboard is only there for demonstration purposes.

Artificial intelligence helps our prototype split up the outside world into 19 categories. Each one is identified by a different color, which creates a Pop Art-like view of what’s ahead. It knows the difference between a street and a sidewalk, and it can identify various objects such as traffic signs, traffic lights, pedestrians, and different types of vehicles including cars, trucks, and bicycles. Much like a human driver, the car recognizes which objects are safe to drive over and which ones it needs to brake for.
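To make the colour-coded, 19-category view concrete, here is a small Python sketch that turns a per-pixel class map (the kind of output a segmentation network produces) into a coloured, Pop-Art-style image. The class list and palette follow the public Cityscapes convention mentioned later in the article; the "network output" is a hand-made stand-in, and none of this is Bosch's code.

Code:
import numpy as np

# The 19 Cityscapes evaluation classes referred to in the article (public convention, not Bosch's internal list).
CLASSES = ["road", "sidewalk", "building", "wall", "fence", "pole", "traffic light",
           "traffic sign", "vegetation", "terrain", "sky", "person", "rider",
           "car", "truck", "bus", "train", "motorcycle", "bicycle"]

# One RGB colour per class (the standard Cityscapes palette, used here only for illustration).
PALETTE = np.array([[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156],
                    [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0],
                    [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60],
                    [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100],
                    [0, 80, 100], [0, 0, 230], [119, 11, 32]], dtype=np.uint8)

def colourise(class_map: np.ndarray) -> np.ndarray:
    """Turn an (H, W) array of class indices 0..18 into an (H, W, 3) colour image."""
    return PALETTE[class_map]

if __name__ == "__main__":
    # Stand-in for a segmentation network's output: sky on top, road below, a small "car" blob.
    seg = np.full((4, 6), CLASSES.index("sky"), dtype=np.int64)
    seg[2:, :] = CLASSES.index("road")
    seg[2, 2:4] = CLASSES.index("car")
    print(colourise(seg).shape)  # (4, 6, 3)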

It’s smart enough to identify what’s ahead with surprising speed and accuracy; it’s been taught a street sign is not a tree or a small child riding a skateboard. It’s chilling to think about. We’re riding in a BMW station wagon that knows almost as much about driving in a city as its two occupants.
Back to school

The prototype learned everything it knows from members of Bosch’s research and development department.

“We have an offline training process,” explained research engineer Dimitrios Bariamis in an interview with Digital Trends. “We give the car images that we annotate, so we say ‘in this part of the image there is a pedestrian, this part of the image is a street,’ and so on. Then we get that into the car, and we give it the image from the camera which is processed according to the parameters that have been previously learned. The system knows that this part of the image is a street because it looks like the street it saw during the training process,” he adds.

Bariamis and his team have fed the system thousands of screen shots from on-board video footage taken in German cities like Munich, Frankfurt, and Stuttgart. They also sourced images from Daimler’s Cityscapes Dataset, which breaks the world down into the exact same 19 categories. These annotated images help the car learn as it moves along, even if it’s traveling in a town it’s never been to before. “Artificial intelligence generalizes the unknown,” Bariamis tells us.
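A minimal sketch of the offline training loop Bariamis describes: annotated images go in, per-pixel predictions are compared against the human annotations, and the parameters are updated. The "network" is a deliberately trivial placeholder (a single 1x1 convolution) and the data is random; it shows the shape of the process only, not Bosch's implementation.

Code:
import torch
import torch.nn as nn

NUM_CLASSES = 19  # the same 19 categories used for annotation

# Placeholder "network": a single 1x1 convolution from RGB to per-class scores.
model = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def training_step(image: torch.Tensor, annotation: torch.Tensor) -> float:
    """One offline step: image is (B, 3, H, W), annotation is (B, H, W) of class indices."""
    optimiser.zero_grad()
    scores = model(image)               # (B, 19, H, W) per-pixel class scores
    loss = loss_fn(scores, annotation)  # compare prediction with the human annotation
    loss.backward()
    optimiser.step()
    return loss.item()

if __name__ == "__main__":
    img = torch.rand(2, 3, 32, 32)                    # stand-in camera frames
    ann = torch.randint(0, NUM_CLASSES, (2, 32, 32))  # stand-in hand annotations
    print(training_step(img, ann))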

The software classifies the world around it even in a heavy rain storm, but it hasn’t been tested in the snow yet. Bariamis is optimistic, and he doesn’t think snow will impair the car’s vision. Right now, the only limitations his team has identified are linked to what the car has and hasn’t seen, and hardware issues. For example, the system has never “seen” a highway yet, so it might not be able to identify a toll booth. It also goes without saying that the car loses its eyesight if something – e.g., the viscous contents of an avian digestive system – suddenly covers up the camera.

The project is the work of Bosch’s forward-thinking research and development arm. Where it goes next depends entirely on the company’s corporate arm and its clients.

Bariamis told us the technology can be integrated into relatively basic driver-assistance features like adaptive cruise control, state-of-the-art semi-autonomous software, and even a fully-autonomous car. Crucially, it can be modified for various uses. The software we experienced in Germany sees the world in 19 colors, but it’s possible to either add more categories when more detailed information is required, or delete a few of them if they’re not needed.

The Warhol-esque view of the world showcased by Bosch’s BMW-based prototype is what will make the advent of robot cars possible in the years to come. It’s an integral part of the technology package that will help the automotive industry transition from building cars to manufacturing intelligent cars.


Whitehall_Bin_Men (Trustworthy Freedom Fighter)
Joined: 13 Jan 2007 | Posts: 2236 | Location: Westminster, LONDON, SW1A 2HB.
Posted: Sat Nov 25, 2017 5:58 pm

Self-driving cars programmed to decide who dies in a crash
https://www.usatoday.com/story/money/cars/2017/11/23/self-driving-cars-programmed-decide-who-dies-crash/891493001/
Todd Spangler | Detroit Free Press
WASHINGTON — Consider this hypothetical:
It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back, enjoying the view. You’re looking out through the trees, trying to get a glimpse of the crystal blue water below you, moving along at the 45-mile-an-hour speed limit.

As you approach a rise in the road, heading south, a school bus appears, driving north, one driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.
Does the car:
A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?
B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?
C. Hit the bus, possibly killing you as well as the driver and kids on the bus?
In everyday driving, such no-win choices may be exceedingly rare but, when they happen, what should a self-driving car — programmed in advance — do? Or in any situation — even a less dire one — where a moral snap judgment must be made?
It's not just a theoretical question anymore, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. About $80 billion has been invested in the field. Tech companies are working feverishly on them, with Google-affiliated Waymo among those testing cars in Michigan, and mobility companies like Uber and Tesla racing to beat them. Automakers are placing a big bet on them. A testing facility to hurry along research is being built at Willow Run in Ypsilanti.
There's every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges, and sharply reduce the more than 35,000 deaths on U.S. highways each year.
But there are also a host of nagging questions to be sorted out as well, from what happens to cab drivers to whether such vehicles will create sprawl.
And there is an existential question:
Who dies when the car is forced into a no-win situation?
“There will be crashes,” said Van Lindberg, an attorney in the Dykema law firm's San Antonio office who specializes in autonomous vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”
Few people seem to be in a hurry to take on these questions, at least publicly.
It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are "important" and links to a brief acknowledgement that "no consensus around acceptable ethical decision-making" has been reached.
Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to take.
Last year, for instance, a Daimler executive set off a wave of criticism when he was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”
Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”
But what if the smaller thing is a child?
How that question gets answered may be important to the development and acceptance of self-driving cars.
Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”
Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.
“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.
While he can’t talk about specific discussions, Lin says some automakers “simply deny that ethics is a real problem, without realizing that they’re making ethical judgment calls all the time” in their development, determining what objects the car will "see," how it will predict what those objects will do next and what the car's reaction should be.
Does the computer always follow the law? Does it slow down whenever it "sees" a child? Is it programmed to generate a random "human" response? Do you make millions of computer simulations, simply telling the car to avoid killing anyone, ever, and program that in? Is that even an option?
“You can see what a thorny mess it becomes pretty quickly,” said Lindberg. “Who bears that responsibility? … There are half a dozen ways you could answer that question leading to different outcomes.”
THE TROLLEY PROBLEM
Automakers and suppliers largely downplay the risks of what in philosophical circles is known as “the trolley problem” — named for a no-win hypothetical situation in which, in the original format, a person witnessing a runaway trolley could allow it to hit several people or, by pulling a lever, divert it, killing someone else.
In the circumstance of the self-driving car, it’s often boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with malfunctioning brakes: A certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.
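To make concrete what "programmed to do one or the other" would mean in code, here is a deliberately crude sketch of a policy that scores each available manoeuvre by expected harm and picks the minimum. No automaker has published logic like this; the point is that the weights themselves are the ethical decision the article says nobody wants to make explicitly, and every name and number below is invented.

Code:
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_harm: float   # probability-weighted harm to people inside the car (0..1)
    external_harm: float   # probability-weighted harm to people outside the car (0..1)

def choose(manoeuvres, occupant_weight=1.0, external_weight=1.0):
    """Pick the manoeuvre with the lowest weighted expected harm.
    The weights ARE the ethical policy: equal weights value everyone equally."""
    def cost(m):
        return occupant_weight * m.occupant_harm + external_weight * m.external_harm
    return min(manoeuvres, key=cost)

options = [
    Manoeuvre("swerve off the road", occupant_harm=0.7, external_harm=0.0),
    Manoeuvre("brake hard in lane",  occupant_harm=0.3, external_harm=0.5),
]

print(choose(options).name)                          # equal weights -> "swerve off the road"
print(choose(options, occupant_weight=2.0).name)     # occupant-first weighting -> "brake hard in lane"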
Philosophical considerations aside, automakers argue the scenario is so contrived as to be all but bunk.
“I don't remember when I took my driver’s license test that this was one of the questions,” said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.
If anything, self-driving cars could almost eliminate such an occurrence. They will sense such a problem long before it would become apparent to a human driver and slow down or stop. Redundancies — for brakes, for sensors — will detect danger and react more appropriately.
“The cars will be smart — I don’t think there's a problem there. There are just solutions," Papadopol said.
Alan Hall, Ford's spokesman for autonomous vehicles, described the self-driving car’s capabilities — being able to detect objects with 360-degree sensory data in daylight or at night — as “superhuman.”
“The car sees you and is preparing different scenarios for how to respond,” he said.
Lin said that, in general, many self-driving automakers believe the simple act of braking, of slowing to a stop, solves the trolley problem. But it doesn't: consider a theoretical case in which you're being tailgated by a speeding fuel tanker.
SHOULD GOVERNMENT DECIDE?
Some experts and analysts believe solving the trolley problem could be a simple matter of regulators or legislators deciding in advance what actions a self-driving car should take in a no-win situation. But others doubt that any set of rules can capture and adequately react to every such scenario.
The question doesn’t need to be as dramatic as asking who dies in a crash either. It could be as simple as deciding what to do about jaywalkers or where a car places itself in a lane next to a large vehicle to make its passengers feel secure or whether to run over a squirrel that darts into a road.
Chris Gerdes, who as director of the Center for Automotive Research at Stanford University has been working with Ford, Daimler and others on the issue, said the question is ultimately not about deciding who dies. It's about how to keep no-win situations from happening in the first place and, when they do occur, setting up a system for deciding who is responsible.

For instance, he noted California law requires vehicles to yield the crosswalk to pedestrians but also says pedestrians have a duty not to suddenly enter a crosswalk against the light. Michigan and many other states have similar statutes.
Presumably, then, there could be a circumstance in which the responsibility for someone darting into the path of an autonomous vehicle at the last minute rests with that person — just as it does under California law.
But that “forks off into some really interesting questions," Gerdes said, such as whether the vehicle could potentially be programmed to react differently, say, for a child. "Shouldn’t we treat everyone the same way?” he asked. "Ultimately, it’s a societal decision,” meaning it may have to be settled by legislators, courts and regulators.
That could result in a patchwork of conflicting rules and regulations across the U.S.

“States would continue to have that ability to regulate how they operate on the road,” said U.S. Sen. Gary Peters, D-Mich., one of the authors of federal legislation under consideration that would allow for tens of thousands of autonomous vehicles to be tested on U.S. highways in the years to come. He says that while design and safety standards will rest with federal regulators, states will continue to impose traffic rules.
Peters acknowledged that it would be “an impossible standard” to eliminate all crashes. But he argued that people need to remember that autonomous vehicles will save tens of thousands of lives a year. In 2015, the consulting firm McKinsey & Co. said research indicated self-driving cars could reduce traffic fatalities by 90% once fully deployed. More than 37,000 people died on U.S. roads in 2016, the vast majority because of human error.
But researchers, automakers, academics and others understand something else about self-driving cars and the risks they may still pose, namely, that for all their promise to reduce accidents, they can't eliminate them.
“It comes back to whether you want to find ways to program in specifics or program in desired outcomes,” said Gerdes. “At the end of the day, you’re still required to come up with what you want the desired outcomes to be and the desired outcome cannot be to avoid any accidents all the time.
“It becomes a little uncomfortable sometimes to look at that."
THE HARD QUESTIONS
While some people in the industry, like Tesla’s Elon Musk, believe fully autonomous vehicles could be on U.S. roads within a few years, others say it could be a decade or more — and even longer before the full promise of self-driving cars and trucks is realized.
The trolley problem is just one that has to be cracked before then.
There are others, like those faced by Daryn Nakhuda, CEO of Mighty AI, which is in the business of labeling, for self-driving cars, all the objects they are going to need to “see” in order to predict and react. A bird flying at the window. A thrown ball. A mail truck parked so there is not enough space in the car’s lane to pass without crossing the center line.
Automakers will have to decide what the car “sees” and what it doesn’t. Seeing everything around it — and processing it — could be a waste of limited processing power. Which means another set of ethical and moral questions.
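As a toy illustration of that trade-off (deciding which detections get the limited processing budget), here is a sketch that ranks detected objects by an invented relevance score and keeps only the top few. Real perception stacks are far more involved; every class, weight and threshold here is made up.

Code:
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    distance_m: float
    in_planned_path: bool

# Hypothetical relevance weighting: things the car may have to brake for score highest.
RELEVANCE = {"child": 10, "pedestrian": 9, "cyclist": 8, "vehicle": 7, "ball": 5, "bird": 1}

def prioritise(detections, budget: int):
    """Keep only the `budget` most relevant detections for downstream prediction."""
    def score(d: Detection) -> float:
        base = RELEVANCE.get(d.label, 3)
        proximity = 1.0 / max(d.distance_m, 1.0)          # nearer objects matter more
        return base * (2.0 if d.in_planned_path else 1.0) + proximity
    return sorted(detections, key=score, reverse=True)[:budget]

if __name__ == "__main__":
    seen = [Detection("bird", 12.0, False),
            Detection("pedestrian", 25.0, True),
            Detection("ball", 8.0, True),
            Detection("vehicle", 60.0, False)]
    for d in prioritise(seen, budget=2):
        print(d.label)   # pedestrian, ball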
Then there is the question of how self-driving cars could be taught to learn and respond to the tasks they are given — the stuff of science fiction that seems about to come true.
While self-driving cars can be programmed — told what to do when that school bus comes hurtling toward them — there are other options. Through millions of computer simulations and data from real self-driving cars being tested, the cars themselves can begin to learn the "best" way to respond to a given situation.
For example, Waymo — Google's self-driving car arm — in a recent government filing said through trial and error in simulations, it's teaching its cars how to navigate a tricky left turn against a flashing yellow arrow at a real intersection in Mesa, Ariz. The simulations — not the programmers — determine when it's best to inch into the intersection and when it's best to accelerate through it. And the cars learn how to mimic real driving.
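A heavily simplified sketch of that "trial and error in simulation" idea: a tiny value-learning loop over an invented left-turn toy problem, in which the simulated car learns to go when the gap in oncoming traffic is large and to hesitate when it is small. This is not Waymo's method or code; real systems learn over continuous sensor data with far richer simulators, and every number here is invented.

Code:
import random

ACTIONS = ["wait", "creep", "go"]

def reward(gap_s: float, action: str) -> float:
    """Invented toy reward: accelerating through a small gap is penalised heavily,
    hesitating costs a little time."""
    if action == "go":
        return 1.0 if gap_s > 4.0 else -10.0
    if action == "creep":
        return -0.1
    return -0.2  # wait

def state(gap_s: float) -> str:
    return "large_gap" if gap_s > 4.0 else "small_gap"   # what the simulated car can observe

q = {s: {a: 0.0 for a in ACTIONS} for s in ("large_gap", "small_gap")}
alpha, epsilon = 0.1, 0.2

for _ in range(20000):                        # stand-in for "millions of simulations"
    gap = random.uniform(0.0, 10.0)
    s = state(gap)
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q[s], key=q[s].get)
    q[s][a] += alpha * (reward(gap, a) - q[s][a])   # incremental estimate of each action's value

print(q)  # learned behaviour: "go" when the gap is large, hesitate when it is small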
Ultimately, through such testing, the cars themselves could potentially learn how best to get from Point A to Point B, simply by being programmed to discern what "best" means — say, the fastest, safest, most direct route. Through simulation and data gathered under real-world conditions, the cars would "learn" and execute the request.
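A small sketch of what "programmed to discern what best means" could look like for route choice: "best" is just a weighted score over time, risk and distance, and changing the weights changes the behaviour. The routes and all the numbers are invented.

Code:
def best_route(routes, w_time=1.0, w_risk=5.0, w_distance=0.1):
    """'Best' is whatever minimises this weighted cost; the weights encode the policy."""
    def cost(r):
        return w_time * r["minutes"] + w_risk * r["risk"] + w_distance * r["km"]
    return min(routes, key=cost)

routes = [
    {"name": "motorway",   "minutes": 22, "risk": 0.2, "km": 31},
    {"name": "back roads", "minutes": 30, "risk": 0.1, "km": 24},
]

print(best_route(routes)["name"])                 # fastest wins with the default weights
print(best_route(routes, w_risk=100.0)["name"])   # safest wins when safety is weighted heavily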
Here's where the science fiction comes in, however.
PLAYING 'GO'
A computer programmed to “learn” how to play the ancient Chinese game of Go by just such a means is not only now beating grandmasters for the first time in history — and long after computers were beating grandmasters in chess — it is making moves that seem counterintuitive and inexplicable to expert human players.
What might that look like with cars?
At the American Center for Mobility in Ypsilanti, Mich., where a testing ground is being completed for self-driving cars, CEO John Maddox said the facility will be able to put to the test what he calls “edge” cases that vehicles will have to deal with regularly — such as not confusing the darkness of a tunnel with a wall, or accurately predicting whether a person is about to step off a curb.
The facility will also play a role, through that testing, of getting the public used to the idea of what self-driving cars can do, how they will operate, how they can be far safer than vehicles operated by humans, even if some questions remain about their functioning.
“Education is critical,” Maddox said. “We have to be able to demonstrate and illustrate how AVs work and how they don’t work.”
As for the trolley problem, most automakers and experts expect some sort of standard to emerge — even if it's not entirely clear what it will be.
At SAE International — formerly known as the Society of Automotive Engineers, a global standards-setting group — Chief Product Officer Frank Menchaca said reaching a perfect standard is a daunting, if not impossible, task, with so many fluid factors involved in any accident: Speed. Situation. Weather conditions. Mechanical performance.
Even with that standard, there may be no good answer to the question of who dies in a no-win situation, he said. Especially if it's to be judged by a human.
“As human beings, we have hundreds of thousands of years of moral, ethical, religious and social behaviors programmed inside of us,” he added. “It’s very hard to replicate that.”

_________________
--
'Suppression of truth, human spirit and the holy chord of justice never works long-term. Something the suppressors never get.' David Southwell
http://aangirfan.blogspot.com
http://aanirfan.blogspot.com
Martin Van Creveld: Let me quote General Moshe Dayan: "Israel must be like a mad dog, too dangerous to bother."
Martin Van Creveld: I'll quote Henry Kissinger: "In campaigns like this the antiterror forces lose, because they don't win, and the rebels win by not losing."
TonyGosling (Editor)
Joined: 25 Jul 2005 | Posts: 15983 | Location: St. Pauls, Bristol, England
Posted: Sat Jan 27, 2018 11:17 am

Echoes of my 'Assassin's guide to Western democracy', so ridiculed by The Daily Beast and Washington Post in 2015.

This dovetails nicely with his belief that Western intelligence agencies have assassinated pretty much everyone of note in the past half-century—former Swedish Prime Minister Olof Palme, Congolese independence leader Patrice Lumumba, Princess Diana, Dr. David Kelly (the British weapons expert), UK politician Robin Cook, John Smith (Tony Blair’s predecessor as leader of the Labour party), Yasser Arafat, Slobodan Milosevic, Hugo Chavez, Jimi Hendrix, Jim Morrison, Bob Marley, John Lennon, and Michael Jackson. All were killed by “forces lurking in the unaccountable grey areas of the NATO countries’ military intelligence services.”

Confessions of an American Illuminati
RT has an Illuminati correspondent, so I guess the jig is up on the great American conspiracy that secretly runs the world.


The secrets of Israel’s assassination operations
Middle East
Jan. 25, 2018 | 12:08 AM
http://www.dailystar.com.lb/News/Middle-East/2018/Jan-25/435307-the-secrets-of-israels-assassination-operations.ashx

Ethan Bronner | Bloomberg

Poisoned toothpaste that takes a month to end its target’s life. Armed drones. Exploding cell phones. Spare tires with remote-control bombs. Assassinating enemy scientists and discovering the secret lovers of Islamic holy men. A new book chronicles these techniques and asserts that Israel has carried out at least 2,700 assassination operations in its 70 years of existence. While many failed, they add up to far more than any other Western country, the book says.

Ronen Bergman, the intelligence correspondent for Yediot Aharonot newspaper, persuaded many agents of Mossad, Shin Bet and the military to tell their stories, some using their real names. The result is the first comprehensive look at Israel’s use of state-sponsored killings.

Based on 1,000 interviews and thousands of documents, and running more than 600 pages, Rise and Kill First makes the case that Israel has used assassination in the place of war, killing half a dozen Iranian nuclear scientists, for instance, rather than launching a military attack. It also strongly suggests that Israel used radiation poisoning to kill Yasser Arafat, the longtime Palestinian leader, an act its officials have consistently denied.

Bergman writes that Arafat’s death in 2004 fits a pattern and had advocates. But he steps back from flatly asserting what happened, saying that Israeli military censorship prevents him from revealing what – or if – he knows.

The book’s title, Rise and Kill First, comes from the ancient Jewish Talmud admonition, “If someone comes to kill you, rise up and kill him first.” Bergman says a huge percentage of the people he interviewed cited that passage as justification for their work. So does an opinion by the military’s lawyer declaring such operations to be legitimate acts of war.

Despite the many interviews, including with former prime ministers Ehud Barak and Ehud Olmert, Bergman, the author of several books, says the Israeli secret services sought to interfere with his work, holding a meeting in 2010 on how to disrupt his research and warning former Mossad employees not to speak with him.

He says that while the U.S. has tighter constraints on its agents than does Israel, President George W. Bush adopted many Israeli techniques after the terrorist attacks of Sept. 11, 2001, and President Barack Obama launched several hundred targeted killings.

“The command-and-control systems, the war rooms, the methods of information gathering and the technology of the pilotless aircraft, or drones, that now serve the Americans and their allies were all in large part developed in Israel,” Bergman writes.

The book gives a textured history of the personalities and tactics of the various secret services. In the 1970s, a new head of operations for Mossad opened hundreds of commercial companies overseas with the idea that they might be useful one day. For example, Mossad created a Middle Eastern shipping business that, years later, came in handy in providing cover for a team in the waters off Yemen.

There have been plenty of failures. After a Palestinian armed group killed Israeli athletes at the 1972 Munich Olympics, Israel sent agents to kill the perpetrators – and shot more than one misidentified man. There were also successful operations that did more harm than good to Israel’s policy goals, Bergman notes.

Bergman raises moral and legal concerns provoked by state-sponsored killing, including the existence of separate legal systems for secret agents and the rest of Israel. But he presents the operations, for the most part, as achieving their aims. While many credit the barrier Israel built along and inside the West Bank with stopping assaults on Israeli citizens in the early 2000s, he argues that what made the difference was “a massive number of targeted killings of [enemy] operatives.”

One of Bergman’s most important sources was Meir Dagan, who headed Mossad for eight years and died in early 2016. Toward the end of his career, Dagan fell out with Prime Minister Benjamin Netanyahu, partly over launching a military attack on Iran. Netanyahu said intelligence techniques such as selling the country faulty parts for its reactors – which Israel and the U.S. were doing – weren’t enough.

Dagan argued that these techniques, especially assassinations, would do the job. As Bergman quotes him saying, “In a car, there are 25,000 parts on average. Imagine if 100 of them are missing. It would be very hard to make it go. On the other hand, sometimes it’s most effective to kill the driver, and that’s that.”

A version of this article appeared in the print edition of The Daily Star on January 25, 2018, on page 9.

Whitehall_Bin_Men (Trustworthy Freedom Fighter)
Joined: 13 Jan 2007 | Posts: 2236 | Location: Westminster, LONDON, SW1A 2HB.
Posted: Tue Mar 13, 2018 8:24 pm

Here's how intelligence agencies and their organised crime chums get into a hundred million target cars!
good wired article

A NEW WIRELESS HACK CAN UNLOCK 100 MILLION VOLKSWAGENS
Wired | 08.10.16, 4:29 PM
In 2013, when University of Birmingham computer scientist Flavio Garcia and a team of researchers were preparing to reveal a vulnerability that allowed them to start the ignition of millions of Volkswagen cars and drive them off without a key, they were hit with a lawsuit that delayed the publication of their research for two years. But that experience doesn’t seem to have deterred Garcia and his colleagues from probing more of VW’s flaws: Now, a year after that hack was finally publicized, Garcia and a new team of researchers are back with another paper that shows how Volkswagen left not only its ignition vulnerable but the keyless entry system that unlocks the vehicle’s doors, too. And this time, they say, the flaw applies to practically every car Volkswagen has sold since 1995.

Later this week at the Usenix security conference in Austin, a team of researchers from the University of Birmingham and the German engineering firm Kasper & Oswald plan to reveal two distinct vulnerabilities they say affect the keyless entry systems of an estimated nearly 100 million cars. One of the attacks would allow resourceful thieves to wirelessly unlock practically every vehicle the Volkswagen group has sold for the last two decades, including makes like Audi and Škoda. The second attack affects millions more vehicles, including Alfa Romeo, Citroen, Fiat, Ford, Mitsubishi, Nissan, Opel, and Peugeot.

The $40 Arduino radio device the researchers used to intercept codes from vehicles' key fobs.
Both attacks use a cheap, easily available piece of radio hardware to intercept signals from a victim’s key fob, then employ those signals to clone the key. The attacks, the researchers say, can be performed with a software defined radio connected to a laptop, or in a cheaper and stealthier package, an Arduino board with an attached radio receiver that can be purchased for $40. “The cost of the hardware is small, and the design is trivial,” says Garcia. “You can really build something that functions exactly like the original remote.”

100 Million Vehicles, 4 Secret Keys
Of the two attacks, the one that affects Volkswagen is arguably more troubling, if only because it offers drivers no warning at all that their security has been compromised, and requires intercepting only a single button press. The researchers found that with some “tedious reverse engineering” of one component inside a Volkswagen’s internal network, they were able to extract a single cryptographic key value shared among millions of Volkswagen vehicles. By then using their radio hardware to intercept another value that’s unique to the target vehicle and included in the signal sent every time a driver presses the key fob’s buttons, they can combine the two supposedly secret numbers to clone the key fob and access to the car. “You only need to eavesdrop once,” says Birmingham researcher David Oswald. “From that point on you can make a clone of the original remote control that locks and unlocks a vehicle as many times as you want.”

The attack isn’t exactly simple to pull off: Radio eavesdropping, the researchers say, requires that the thief’s interception equipment be located within about 300 feet of the target vehicle. And while the shared key that’s also necessary for the theft can be extracted from one of a Volkswagen’s internal components, that shared key value isn’t quite universal; there are several different keys for different years and models of Volkswagen vehicles, and they’re stored in different internal components.

The researchers aren’t revealing which components they extracted the keys from to avoid tipping off potential car hackers. But they warn that if sophisticated reverse engineers are able to find and publicize those shared keys, each one could leave tens of millions of vehicles vulnerable. The four most common keys alone are used in close to all of the 100 million Volkswagen vehicles sold in the past twenty years. They say that only the most recent VW Golf 7 model and others that share its locking system have been designed to use unique keys and are thus immune to the attack.

Cracked in 60 Seconds
The second technique that the researchers plan to reveal at Usenix attacks a cryptographic scheme called HiTag2, which is decades old but still used in millions of vehicles. For that attack they didn’t need to extract any keys from a car’s internal components. Instead, a hacker would have to use a radio setup similar to the one used in the Volkswagen hack to intercept eight of the codes from the driver’s key fob, which in modern vehicles includes one rolling code number that changes unpredictably with every button press. (To speed up the process, they suggest that their radio equipment could be programmed to jam the driver’s key fob repeatedly, so that he or she would repeatedly press the button, allowing the attacker to quickly record multiple codes.)

With that collection of rolling codes as a starting point, the researchers found that flaws in the HiTag2 scheme would allow them to break the code in as little as one minute. “No good cryptographer today would propose such a scheme,” Garcia says.

Volkswagen didn’t immediately respond to WIRED’s request for comment, but the researchers write in their paper that VW acknowledged the vulnerabilities they found. NXP, the semiconductor company that sells chips using the vulnerable HiTag2 crypto system to carmakers, says that it’s been recommending customers upgrade to newer schemes for years. “[HiTag2] is a legacy security algorithm, introduced 18 years ago,” writes NXP spokesperson Joon Knapen. “Since 2009 it has been gradually replaced by more advanced algorithms. Our customers are aware, as NXP has been recommending not to use HT2 for new projects and design-ins for years.”
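For contrast with the legacy schemes discussed above, here is a minimal sketch of the challenge-response pattern that newer keyless systems rely on, in which each fob holds its own secret and a fresh random challenge makes recorded radio traffic useless. HMAC is used here only for brevity; production automotive schemes typically use AES, and nothing below is NXP's or Volkswagen's actual protocol — the fob IDs and provisioning are invented.

Code:
import hmac, hashlib, os

# Each fob is provisioned with its own secret, unlike a fleet-wide shared key.
FOB_SECRETS = {"fob-001": os.urandom(16)}

def car_challenge() -> bytes:
    """The car sends a fresh random challenge, so replaying old responses is useless."""
    return os.urandom(16)

def fob_response(fob_id: str, challenge: bytes) -> bytes:
    return hmac.new(FOB_SECRETS[fob_id], challenge, hashlib.sha256).digest()

def car_verify(fob_id: str, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(FOB_SECRETS[fob_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    c = car_challenge()
    r = fob_response("fob-001", c)
    print(car_verify("fob-001", c, r))                 # True: legitimate fob answers the live challenge
    print(car_verify("fob-001", car_challenge(), r))   # False: a recorded response cannot be replayed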

While the researchers’ two attacks both focus on merely unlocking cars rather than stealing them, Garcia points out that they might be combined with techniques like the one he and different teams revealed at the Usenix conferences in 2012 and last year. That research exposed vulnerabilities in the HiTag2 and Megamos “immobilizer” systems that prevent cars from being driven without a key, and would allow millions of Volkswagens and other vehicles ranging from Audis to Cadillacs to Porsches to be driven by thieves, provided they could get access to the inside of the vehicle.

Black Boxes and Mysterious Thefts
Plenty of evidence suggests that sort of digitally enabled car theft is already occurring. Police have been stumped by videos of cars being stolen with little more than a mystery electronic device. In one case earlier this month thieves in Texas stole more than 30 Jeeps using a laptop, seemingly connected to the vehicle’s internal network via a port on its dashboard. “I’ve personally received inquiries from police officers,” says Garcia, who added they had footage of thieves using a “black box” to break into cars and drive them away. “This was partly our motivation to look into it.”

For car companies, a fix for the problem they’ve uncovered won’t be easy, Garcia and Oswald contend. “These vehicles have a very slow software development cycle,” says Garcia. “They’re not able to respond very quickly with new designs.”

Until then, they suggest that car owners with affected vehicles—the full list is included in the researchers’ paper (see below)—simply avoid leaving any valuables in their car. “A vehicle is not a safebox,” says Oswald. Careful drivers, they add, should even consider giving up on their wireless key fobs altogether and instead open and lock their car doors the old-fashioned, mechanical way.

But really, they point out, their research should signal to automakers that all of their systems need more security scrutiny, lest the same sort of vulnerabilities apply to more critical driving systems. “It’s a bit worrying to see security techniques from the 1990s used in new vehicles,” says Garcia. “If we want to have secure, autonomous, interconnected vehicles, that has to change.”


Whitehall_Bin_Men (Trustworthy Freedom Fighter)
Joined: 13 Jan 2007 | Posts: 2236 | Location: Westminster, LONDON, SW1A 2HB.
Posted: Tue Mar 20, 2018 9:02 am

Entirely predictable collateral damage in the transhumanist agenda

Uber suspends self-driving tests after car kills woman in Arizona
http://www.presstv.com/Detail/2018/03/20/556033/US-Arizona-Uberselfdriving-car-woman-dies

Tue Mar 20, 2018 06:18AM

In this file photo taken on September 13, 2016, pilot models of the Uber self-driving car are displayed at the Uber Advanced Technologies Center in Pittsburgh, Pennsylvania. (Photo by AFP)

A woman died of her injuries after being struck by an Uber self-driving vehicle in Arizona, police said on Monday, and the ride-hailing company said it had suspended its autonomous vehicle program across the United States and Canada.

The accident in Tempe, Arizona, marked the first fatality caused by a self-driving vehicle, a technology still being tested around the globe, and could derail efforts to fast-track its introduction in the United States.

At the time of the accident, which occurred overnight Sunday to Monday, the car was in autonomous mode with a vehicle operator behind the wheel, Tempe police said.

A spokesman for Uber Technologies Inc said the company was suspending its North American tests. In a tweet, Uber expressed its condolences and said the company was fully cooperating with authorities.

(Source: AP)

Whitehall_Bin_Men wrote:
http://uk.businessinsider.com/uber-self-driving-car-fails-2016-12

I was behind the wheel when a self-driving Uber failed — here's what happens
Danielle Muoio | Dec. 24, 2016, 4:37 PM

Uber launched its second pilot program in San Francisco last week, but the day it launched, a car ran straight through a red light.

Uber has since said the incident was due to human error, but it's not clear whether that means a person drove through the light or failed to stop the car from doing so while it was in autonomous mode. Either way, Uber knows its cars will fail from time to time, which is why a safety driver and an engineer sit up front while the cars autonomously drive people.

(Uber shut down the San Francisco pilot program on Wednesday after the California DMV revoked the cars' registration.)

Uber let us get behind the wheel for the launch of its pilot program in Pittsburgh in September, and we got to see firsthand what it's like when the car fails and needs a driver to take over.

Keep in mind that Uber used self-driving Volvo XC90s for the San Francisco pilot instead of the self-driving Ford Fusions in Pittsburgh. As a result, the interface we experienced is slightly different from the one in the Volvo cars.

But you can scroll down to get a basic sense of what it's like when the robot cars need help:

First, a brief introduction to Uber's self-driving car in Pittsburgh: a Ford Fusion retrofitted with autonomous tech. The car has a massive, spinning lidar on top and 20 cameras. That doesn't even factor in the several radar and lidar modules on the side and GPS units helping the car drive safely.
Lidar (short for light detection and ranging) is a remote-sensing technology that uses lasers to map out the world around the car so it can "see" obstacles.

That lidar on top is exceptionally powerful. Eric Meyhofer, the engineering lead for the self-driving-car project, says it's capable of firing 1.4 million laser points per second to build a 3D view of the car's surroundings. A camera under the giant lidar machinery transforms that black-and-white 3D view into color so it can sense things like traffic-light changes.
But that doesn't mean the car is ready to go out in the world all on its own. We've already heard that Uber's self-driving cars struggle with bridges because there aren't enough environmental cues for the car to figure out where it is.
You can read a bit more about that problem here.

And Uber itself has said it chose Pittsburgh because it poses so many challenges for its self-driving cars. "We have a very old city, very complex road network, real traffic problems here, [and] extreme weather here," said Raffi Krikorian, the director of Uber's Advanced Technologies Center. "So, in a lot of ways, Pittsburgh is the double black diamond of driving."
That's why the lucky few who are able to hail a self-driving Uber vehicle will still see a driver behind the wheel and an engineer in the passenger seat. Uber is aware there are situations in which a human may need to take over, and it has prepped accordingly.
Uber let me get behind the wheel, and the whole system was really easy to use. A button on the center console will kick the car into autonomous mode. Right next to it is a giant red "kill switch" that, when hit, lets you take control of the car again.
The self-driving Volvo XC90s have this same set-up.

The kill switch isn't entirely necessary, however, and Uber drivers are told it's better to take over by pressing the brake or accelerator or turning the wheel yourself. A tool bar behind the wheel will show whether the car is in manual mode, as shown by a blue circle, or if it is driving autonomously, as indicated by a green checkmark.
Being in the driver's seat was fairly nerve-wracking at first. There's something difficult about not being in control, and seeing a wheel move on its own is downright spooky when you're not used to it.
But the car did perform relatively well in Pittsburgh! It accelerated just the right amount up steep hills and always had a smooth brake when approaching stopped cars. It also arguably handled left turns better than I do.
Still, it did have some problems. I was driving on a perfectly straight back road, pictured below, without any cars when I heard a ding indicating the car wasn't driving itself anymore. The engineer in the passenger seat wasn't sure why the car stopped driving.
When the car goes back into manual mode, it doesn't automatically stall but begins to slow down. That means you have to be aware the entire time you're behind the wheel in case you're sharing the road with other vehicles.
When I was riding in the backseat, the car switched into manual mode on a busy bridge. Our driver had his hands on the wheel the entire time and took over so quickly you wouldn't have known anything had happened had a noise not sounded. We were told the failure had nothing to do with being on a bridge but with how busy our surroundings were.
There are also situations in which drivers are advised to take over, even if the car doesn't switch to manual mode. When I was behind the wheel and approached a car pulled over with its hazard lights on, I was instructed to handle the maneuvering to be on the safe side.
Overall, it's really easy to take back control when the car suddenly switches into manual mode. Considering how many times it happened on just my 5-mile ride, it's obvious that Uber's self-driving cars still have some kinks to work out.
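To tie the slideshow together, here is a minimal sketch of the mode-switching behaviour described above: autonomous versus manual indicators, any driver input or internal fault dropping the car back to manual with a chime, after which the vehicle slows rather than stalls. The class, state names and details are invented for illustration; this is not Uber's software.

Code:
from enum import Enum

class Mode(Enum):
    MANUAL = "manual (blue circle)"
    AUTONOMOUS = "autonomous (green checkmark)"

class DriveModeManager:
    def __init__(self):
        self.mode = Mode.MANUAL

    def engage(self):
        self.mode = Mode.AUTONOMOUS

    def update(self, brake_pressed: bool, accel_pressed: bool,
               steering_override: bool, system_fault: bool) -> Mode:
        """Any driver input or fault disengages autonomy, like the behaviour described above."""
        if self.mode is Mode.AUTONOMOUS and (brake_pressed or accel_pressed
                                             or steering_override or system_fault):
            self.mode = Mode.MANUAL
            self.chime()   # the "ding" the article mentions
        return self.mode

    def chime(self):
        print("ding: control returned to driver; vehicle coasts and slows until driven")

if __name__ == "__main__":
    m = DriveModeManager()
    m.engage()
    print(m.update(False, False, False, False))   # stays autonomous
    print(m.update(True, False, False, False))    # brake press -> back to manual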
