sponsors of disinformation blitzkrieg.......

A team of researchers at the University of Adelaide has found that as many as 80 percent of tweets about the 2022 Russia-Ukraine invasion in its early weeks were part of a covert propaganda campaign originating from automated fake "bot" accounts. An anti-Russia propaganda campaign originating from a "bot army" of phony automated Twitter accounts flooded the internet at the start of the war.
By Peter Cronau
The research shows that of the more than 5 million tweets studied, 90.2 percent (both bot and non-bot) came from accounts that were pro-Ukraine, with fewer than 7 percent of the accounts being classed as pro-Russian. The university researchers also found these automated tweets had been purposely used to drive up fear amongst the people targeted by them, boosting a high level of statistically measurable "angst" in the online discourse.

The research team analysed an unprecedented 5,203,746 tweets, sent with key hashtags, in the first two weeks of the Russian invasion of Ukraine from Feb. 24. The researchers looked at predominantly English-language accounts; some 1.8 million unique Twitter accounts in the dataset posted at least one English-language tweet.

The results were published in August in a research paper, titled "#IStandWithPutin versus #IStandWithUkraine: The interaction of bots and humans in discussion of the Russia/Ukraine war," by the University of Adelaide's School of Mathematical Sciences.

The size of the sample under study, of over 5 million tweets, dwarfs other recent studies of covert propaganda in social media surrounding the Ukraine war. The little-reported Stanford University/Graphika research on Western disinformation, analysed by Declassified Australia in September, examined just under 300,000 tweets from 146 Twitter accounts. The Meta/Facebook research on Russian disinformation, reported widely by mainstream media including the Australian Broadcasting Corporation (ABC) a fortnight later, looked at only 1,600 Facebook accounts.

Reports on the new research have appeared in only a few independent media sites, and on Russia's RT. The ground-breaking study exposing a massive anti-Russia social media disinformation campaign has been effectively ignored by Western establishment media, showing how stories that don't fit the desired pro-Western narrative are routinely buried.
Disinformation Blitzkrieg

The Adelaide University researchers unearthed a massive organised pro-Ukraine influence operation underway from the early stages of the conflict. Overall, the study found automated "bot" accounts to be the source of between 60 and 80 percent of all tweets in the dataset.

The published data shows that in the first week of the Ukraine-Russia war there was a huge mass of pro-Ukrainian hashtag bot activity. Approximately 3.5 million tweets using the hashtag #IStandWithUkraine were sent by bots in that first week. It was as if someone had flicked a switch at the start of the war, with pro-Ukraine bot activity suddenly bursting into life. On the first day of the war the #IStandWithUkraine hashtag was used in as many as 38,000 tweets each hour, rising to 50,000 tweets an hour by day three of the war.

By comparison, the data shows that in the first week there was an almost total absence of pro-Russian bot activity using the key hashtags. During that first week of the invasion, pro-Russian bots were sending off tweets using the #IStandWithPutin or #IStandWithRussia hashtags at a rate of only several hundred per hour. Given the apparent long-range planning for the invasion of Ukraine, cyber experts expressed surprise that Russian cyber and internet responses were so laggard. A researcher at the Centre for Security Studies in Switzerland said: "The [pro-Russian] cyber operations we have seen do not show long preparation, and instead look rather haphazard."

After this apparently flat-footed start, the #IStandWithPutin hashtag, coming mainly from automated bots, eventually fired up a week after the start of the war. That hashtag started appearing in higher numbers on March 2, day 7 of the war. It reached 10,000 tweets per hour just twice over the next two days, still far behind the pro-Ukraine tweeting activity. The #IStandWithRussia hashtag use was even smaller, reaching only 4,000 tweets per hour. After just two days of operation, the pro-Russian hashtag activity had dropped away almost completely. The study's researchers noted the automated bot accounts "likely used by Russian authorities" were "removed likely by pro-Ukrainian authorities."

The reaction against these pro-Russian accounts had been swift. On March 5, after the #IStandWithPutin hashtag had trended on Twitter, the company announced it had banned over 100 accounts using the hashtag for violating its "platform manipulation and spam policy" and participating in "coordinated inauthentic behaviour." Later that month, the Ukraine Security Service (SBU) reportedly raided five "bot farms" operating inside the country. The Russia-linked bot operators were reportedly operating through 100,000 fake social media accounts spreading disinformation that was "intended to inspire panic among Ukrainian masses."
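(As an aside, the hourly volumes quoted above come down to simple time-bucketed counting. The sketch below illustrates how such per-hour hashtag tallies could be computed from a tweet dataset – the file name and column names are assumptions for illustration, not the study's data or code.)

```python
# Illustrative sketch of hourly hashtag counting; "tweets.csv" and its
# 'created_at'/'text' columns are hypothetical placeholders.
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

for tag in ("#IStandWithUkraine", "#IStandWithPutin", "#IStandWithRussia"):
    subset = tweets[tweets["text"].str.contains(tag, case=False, na=False)]
    hourly = subset.set_index("created_at").resample("1H").size()
    peak = int(hourly.max()) if not hourly.empty else 0
    print(tag, "peak tweets per hour:", peak)
```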
Unfiltered Research

The landmark Adelaide University research differs from these earlier studies in another significant way. While the Stanford/Graphika and Meta research was produced by researchers with long-term, deep ties to the U.S. national security state, the Adelaide University researchers are remarkably independent.

The academic team is from the university's School of Mathematical Sciences. Using mathematical calculations, they set out to predict and model people's psychological traits based on their digital footprint. Unlike the datasets selected and provided for the Stanford/Graphika and the Meta research, the data the Adelaide University team accessed did not come from accounts that had already been detected for breaching guidelines and shut down by Meta or Twitter.

Joshua Watt, one of the lead researchers on the university team and a Master of Philosophy candidate in applied mathematics, told Declassified Australia that the dataset of 5 million tweets was accessed directly by the team from Twitter using an academic licence giving access to the Twitter API. The Application Programming Interface is a software interface that allows researchers to retrieve and analyse Twitter data directly.

The fake tweets and automated bot accounts had not been detected and removed by Twitter before being analysed by the researchers, although some were possibly removed in Twitter's March sweep. Watt told Declassified Australia that many of the bot accounts behind the 5 million tweets studied are in fact likely to be still up and running.

Declassified Australia contacted Twitter to ask what action it may have taken to remove the fake bot accounts identified in the University of Adelaide research. Twitter had not responded by the time of going to press.
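To give a sense of what that access looks like in practice, here is a rough sketch of a query against Twitter's v2 full-archive search endpoint of the kind academic-licence projects used. It is an illustration only, not the Adelaide team's actual code, and the bearer token, query and date range are assumed placeholders.

```python
# Sketch of pulling tweets via the Twitter API v2 full-archive search endpoint.
# Credentials, query and dates are illustrative assumptions.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # hypothetical credential
URL = "https://api.twitter.com/2/tweets/search/all"

params = {
    "query": "#IStandWithUkraine OR #IStandWithPutin OR #IStandWithRussia lang:en",
    "start_time": "2022-02-24T00:00:00Z",
    "end_time": "2022-03-10T00:00:00Z",
    "max_results": 500,
    "tweet.fields": "created_at,author_id,lang",
}
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

resp = requests.get(URL, params=params, headers=headers, timeout=30)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["author_id"], tweet["text"][:80])
```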
Critical Tool in Info War

This new research paper confirms mounting fears that social media has covertly become what the researchers call "a critical tool in information warfare playing a large role in the Russian invasion of Ukraine."

The Adelaide University researchers tried their best to be noncommittal in describing the activities of the fake Twitter accounts, although they had found the vast majority – over 90 percent – were anti-Russian messages. They stated: "Both sides in the Ukrainian conflict use the online information environment to influence geopolitical dynamics and sway public opinion."

They found the two main participating sides in the propaganda war have their own particular goals and style. "Russian social media pushes narratives around their motivation, and Ukrainian social media aims to foster and maintain external support from Western countries, as well as promote their military efforts while undermining the perception of the Russian military."

While the research findings concentrated on automated Twitter bots, there were also findings on the use of hashtags by non-bot tweeters. They found significant information flows from non-bot pro-Russian accounts, but no significant flows from non-bot pro-Ukraine accounts.

As well as being far more active, the pro-Ukraine side was found to be far more advanced in its use of automated bots. The pro-Ukrainian side used more "astroturf bots" than the pro-Russians. Astroturf bots are hyperactive political bots that continuously follow many other accounts to increase the number of followers of that account.
Social Media Role in Boosting Fear

Crucially, the University of Adelaide researchers also investigated the psychological influence the fake automated bot accounts had on the online conversation during those early weeks of the war. Such conversations in a target audience may develop over time into support for or opposition to governments and policies – but they may also have more immediate effects on the target audience's decisions.

The study found that it was the tweets from the fake "bot" accounts that most drove an increase in conversations surrounding "angst" amongst the people targeted by them. They found these automated bot accounts increased "the use of words in the angst category which contains words related to fear and worry, such as 'shame,' 'terrorist,' 'threat,' 'panic.'"

By combining the "angst" messaging with messages about "motion" and geographical locations, the researchers found "the bot accounts are influencing more discussion surrounding moving/fleeing/going or staying." The researchers believe the effect may well have been to influence Ukrainians, even those away from the conflict zones, to flee their homes.

The research shows that fake automated social media "bot" accounts do manipulate public opinion by shaping the discourse, sometimes in very specific ways. The results provide a chilling indication of the very real malign effects that mass social media disinformation campaigns can have on an innocent civilian population.
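The "angst category" analysis described above is a form of lexicon-based word counting. The following sketch shows, under assumed inputs, how the share of angst-related words in a set of tweets could be measured – the four-word list and sample tweets are invented for illustration and are not the study's lexicon or data.

```python
# Illustrative LIWC-style category counting; the word list and tweets below
# are placeholders, not the study's actual lexicon or dataset.
import re
from collections import Counter

ANGST_WORDS = {"shame", "terrorist", "threat", "panic"}  # assumed subset

def angst_rate(tweets):
    """Return the share of tokens that fall in the angst category."""
    totals = Counter()
    for text in tweets:
        tokens = re.findall(r"[a-z']+", text.lower())
        totals["tokens"] += len(tokens)
        totals["angst"] += sum(1 for t in tokens if t in ANGST_WORDS)
    return totals["angst"] / max(totals["tokens"], 1)

bot_tweets = ["Panic is spreading, the threat is real", "Flee now!"]
human_tweets = ["Sending support to friends in Kyiv tonight"]
print("bot angst rate:", round(angst_rate(bot_tweets), 3))
print("human angst rate:", round(angst_rate(human_tweets), 3))
```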
Origins of Twitter Bot Accounts

The researchers report that the overwhelming mass of anti-Russian Twitter disinformation came from bots "likely [organised] by pro-Ukrainian authorities." The researchers made no further findings about the origin of the 5 million tweets, but did find that some bots "are pushing campaigns specific to certain countries [unnamed], and hence sharing content aligned with those timezones." The data does show that the peak time for a selection of pro-Ukrainian bot activity occurred between 6pm and 9pm across U.S. time zones.

Some indication of the origin and the targeting of the messages could be deduced from the languages used in the 5 million tweets. Over 3.5 million tweets, or 67 percent, were in English, with fewer than 2 percent in Russian or Ukrainian.

In May 2022, the National Security Agency (NSA) director and U.S. Cyber Command chief, General Paul Nakasone, revealed that Cyber Command had been conducting offensive information operations in support of Ukraine. "We've conducted a series of operations across the full spectrum: offensive, defensive, [and] information operations," Nakasone said. Nakasone said the U.S. had been conducting operations aimed at dismantling Russian propaganda. He said the operations were lawful, conducted through policy determined by the U.S. Defense Department and with civilian oversight. Nakasone said the U.S. seeks to tell the truth when conducting an information operation, unlike Russia.

U.S. Cyber Command had deployed a "hunt forward" cyber team to Ukraine in December to help shore up Ukraine's cyber defences and networks against active threats in anticipation of the invasion. A newly formed European Union cyber rapid-response team of 12 experts joined the Cyber Command team to look for active cyber threats inside Ukrainian networks and to strengthen the country's cyber defences.

The U.S. has invested $40 million since 2017 in helping Ukraine buttress its information technology sector. According to U.S. Deputy Secretary of State Wendy Sherman, the investments have helped Ukrainians "keep their internet on and information flowing, even in the midst of a brutal Russian invasion."
Wars & Lies in Our Pockets

With the rise of the internet, war and armed conflict will never be the same. Analysts have noted that the Russian invasion of Ukraine has ushered in a "new digital era of military, political and economic conflict" being manipulated by "laptop generals and bot armies."

"In all dimensions of this conflict, digital technology plays a key role – as a tool for cyberattacks and digital protest, and as an accelerator for flows of information and disinformation," wrote analysts at the Heinrich Böll Stiftung in Brussels. "Propaganda has been a part of war since the beginning of history, but never before could it be so widely spread beyond an actual conflict area and targeted to so many different audiences."

Joshua Watt, one of the lead researchers on the University of Adelaide team that conducted the landmark study, summed it up: "In the past, wars have been primarily fought physically, with armies, air force and navy operations being the primary forms of combat. However, social media has created a new environment where public opinion can be manipulated at a very large scale."

"CNN brought once-distant wars into our living rooms," another analyst stated, "but TikTok and YouTube and Twitter have put them in our pockets."

We are all carrying around with us a powerful source of information and news media – and also, most certainly, disinformation that's coming relentlessly at us from influence operations run by "bad actors" whose aim is to deceive.
Peter Cronau is an award-winning investigative journalist, writer, and film-maker. His documentaries have appeared on ABC TV’s Four Corners and Radio National’s Background Briefing. He is an editor and cofounder of DECLASSIFIED AUSTRALIA. He is co-editor of the recent book A Secret Australia – Revealed by the WikiLeaks Exposés. This article is from Declassified Australia.
READ MORE: https://consortiumnews.com/2022/11/06/researchers-find-massive-anti-russian-bot-army/
FREE JULIAN ASSANGE NOW...........................
unit 8200....
BACK IN 2014, Glenn Greenwald:
One of the many pressing stories that remains to be told from the Snowden archive is how western intelligence agencies are attempting to manipulate and control online discourse with extreme tactics of deception and reputation-destruction. It’s time to tell a chunk of that story, complete with the relevant documents.
Over the last several weeks, I worked with NBC News to publish a series of articles about “dirty trick” tactics used by GCHQ’s previously secret unit, JTRIG (Joint Threat Research Intelligence Group). These were based on four classified GCHQ documents presented to the NSA and the other three partners in the English-speaking “Five Eyes” alliance. Today, we at the Intercept are publishing another new JTRIG document, in full, entitled “The Art of Deception: Training for Online Covert Operations.”
By publishing these stories one by one, our NBC reporting highlighted some of the key, discrete revelations: the monitoring of YouTube and Blogger, the targeting of Anonymous with the very same DDoS attacks they accuse “hacktivists” of using, the use of “honey traps” (luring people into compromising situations using sex) and destructive viruses. But, here, I want to focus and elaborate on the overarching point revealed by all of these documents: namely, that these agencies are attempting to control, infiltrate, manipulate, and warp online discourse, and in doing so, are compromising the integrity of the internet itself.
Among the core self-identified purposes of JTRIG are two tactics: (1) to inject all sorts of false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable. To see how extremist these programs are, just consider the tactics they boast of using to achieve those ends: “false flag operations” (posting material to the internet and falsely attributing it to someone else), fake victim blog posts (pretending to be a victim of the individual whose reputation they want to destroy), and posting “negative information” on various forums. Here is one illustrative list of tactics from the latest GCHQ document we’re publishing today:
READ MORE:
https://theintercept.com/2014/02/24/jtrig-manipulation/
MEANWHILE:
BY ALAN MACLEOD
A MintPress study has found that hundreds of former agents of the notorious Israeli spying organization, Unit 8200, have attained positions of influence in many of the world’s biggest tech companies, including Google, Facebook, Microsoft and Amazon.
The Israeli Defense Forces’ (IDF) Unit 8200 is infamous for surveilling the indigenous Palestinian population, amassing kompromat on individuals for the purposes of blackmail and extortion. Spying on the world’s rich and famous, Unit 8200 hit the headlines last year, after the Pegasus scandal broke. Former Unit 8200 officers designed and implemented software that spied on tens of thousands of politicians and likely aided in the killing of Saudi journalist Jamal Khashoggi.
GOOGLE
According to employment website LinkedIn, there are at least 99 former Unit 8200 veterans currently working for Google. This number almost certainly underestimates the scale of the collaboration between the two organizations, however. For one, it does not count former Google employees. Nor does it include those without a public LinkedIn account, or those who do have an account but have not disclosed their previous affiliations with the high-tech Israeli surveillance unit. This is likely to be a considerable number, as agents are expressly prohibited from ever revealing their affiliation to Unit 8200. Thus, the figure of 99 only represents the number of current (or extremely recent) Google employees who are brazenly flouting Israeli military law by including the organization in their profiles.
These include:
Gavriel Goidel: Between 2010 and 2016, Goidel served in Unit 8200, rising to become Head of Learning at the organization, leading a large team of operatives who sifted through intelligence data to “understand patterns of hostile activists”, in his own words, transmitting that information to superiors. Whether this included any of the over 1000 Gazan civilians Israel killed during their 2014 bombardment of Gaza is unknown. Goidel was recently appointed Head of Strategy and Operations at Google.
Jonathan Cohen: Cohen was a team leader during his time in Unit 8200 (2000-2003). He has since spent more than 13 years working for Google in various senior positions, and is currently Head of Insights, Data and Measurement.
Ori Daniel: Between 2003 and 2006, Daniel was a technical operations specialist with Unit 8200. After a stint with Palantir, he joined Google in 2018, rising to become Head of Global Self-Service for Google Waze.
Ben Bariach: For nearly five years between 2007 and 2011, Bariach served as a cyber intelligence officer, where he "commanded strategic teams of elite officers and professionals." Since 2016, he has worked for Google. Between 2018 and 2020, he concentrated on tackling "controversial content, disinformation and cyber-security". Today, he is a product partnership manager for Google in London.
Notably, Google appears to not only accept former Unit 8200 agents with open arms, but to actively recruit current members of the controversial organization. For example, in October 2020, Gai Gutherz left his job as a project leader at Unit 8200 and walked into a full time job at Google as a software engineer. In 2018, Lior Liberman appears to have done the same thing, taking a position as a program manager at Google after 4 years in military intelligence. Earlier this year, she left Google and now works at Microsoft.
SPYING ON PALESTINIANS
Some might contend that all Israelis are compelled to complete military service, so what is the problem with young people using the tech skills they learned in the IDF in civilian life? In short, why is this Unit 8200-to-Silicon-Valley pipeline a problem?
To begin with, Unit 8200 is not a run-of-the-mill regiment. Described as “Israel’s NSA” and located on a gigantic base near Beer Sheva in the Negev desert, Unit 8200 is the IDF’s largest unit – and one of its most exclusive. The brightest young minds in the country compete to be sent to serve at this Israeli Harvard. Although military service is compulsory for Jewish Israelis, Arab citizens are strongly discouraged from joining the military and are effectively blocked from Unit 8200. Indeed, they are the prime targets of the apartheid state’s surveillance operations.
The Financial Times called Unit 8200 “Israel at its best and worst” – the centerpiece of both its burgeoning high-tech industry and of its repressive state apparatus. Unit 8200 veterans have gone on to produce many of the world’s most downloaded apps, including maps service Waze, and communications app Viber. But in 2014, 43 reservists, including several officers, sent a letter to Prime Minister Benjamin Netanyahu, informing him they would no longer serve in its ranks due to its involvement in the political persecution of Palestinians.
This consisted of using big data to compile dossiers on huge numbers of the indigenous domestic population, including their medical history, sex lives, and search histories, in order that it could be used for extortion later. If a certain individual needed to travel across checkpoints for crucial medical treatment, permission could be suspended until they complied. Information, such as if a person was cheating on their spouse or was homosexual, is also used as bait for blackmail. One former Unit 8200 man said that as part of his training, he was assigned to memorize different Arabic words for “gay” so that he could listen out for them in conversations.
READ MORE:
https://www.unz.com/article/revealed-the-former-israeli-spies-working-in-top-jobs-at-google-facebook-and-microsoft/
READ FROM TOP.
FREE JULIAN ASSANGE NOW ≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥≥