Posts tagged MR. SOEWARNO



USA: Man gets 4 years for $1.7M investment fraud scheme | Global Corruption |
Gregg E. Steinnagel


A Chicago man was sentenced to four years in prison Friday for a bogus investment scheme that swindled $1.7 million from at least 20 people.

Gregg E. Steinnagel, 54, pleaded guilty last November to one count of wire fraud, according to a statement from the U.S. attorney’s office.

He was sentenced to 48 months in prison at a hearing Friday before U.S. District Judge John Z. Lee, the statement said.

In sentencing paperwork filed by prosecutors, one of the victims said she trusted her life savings to Steinnagel and his now-deceased cohort Jeffrey Fazzio. The pair promised her and other victims their money would be invested and repaid at high rates of interest with no risk of loss.

Instead, Steinnagel and Fazzio actually kept most of the money and used some of it to gamble at casinos in the Chicago area, Florida and Nevada, prosecutors said.

They provided victims fabricated promissory notes purportedly signed by the owners of real estate securing the victims’ investments, prosecutors said.

They also strung victims along with occasional cash payments of several hundred or several thousand dollars to gain their trust and persuade them to invest even larger amounts of money, prosecutors said.

At the time of the fraud, Steinnagel worked as a truck driver while Fazzio was a restaurant and department store worker who posed as an attorney, prosecutors said.

Steinnagel was also ordered to pay $1,700,000 in restitution to 20 of his victims, the statement said. He will begin serving his sentence July 10.



ABOVE – Princess Cristina
BELOW – Inaki Urdangarin


A Spanish judge Monday took a step towards seizing the assets of Cristina, sister of King Felipe VI, pending her trial in a fraud scandal that has shamed the royal family.

The ruling said Cristina, 49, had failed to pay a 2.7 million euro ($3 million) court bond to cover her liability in the case. It ordered her to hand over a list of her assets with a view to impounding them.

Cristina Federica de Borbon y Grecia is the first member of Spain’s royal family ever to be put in the dock. A date has yet to be set for her trial.

In a written ruling Monday, Judge Jose Castro said the deadline for the bond, ordered in December, had long passed and Cristina still had not paid it.

He ordered her and 10 other defendants to present within three days details of their “current accounts, deposits, financial assets and real estate”.

Cristina, whose official title is the Infanta, is accused of taking part in tax evasion by her husband, the former Olympic handball player Inaki Urdangarin.

Cristina’s lawyers say she is innocent of any wrongdoing and that she trusted her husband to handle their financial affairs.

Urdangarin is accused along with a former business partner of creaming off six million euros in public funds from contracts awarded to Noos, a charitable foundation which he chaired.

Public prosecutors had called on the court to shelve the case, saying there was not enough evidence against Cristina. They hinted that investigators were out to get the princess.

The so-called Noos scandal has fanned public anger against the ruling class during the recent years of economic hardship in Spain.

It soured the reign of Felipe’s father Juan Carlos, who gave up the throne in June 2014 after 39 years, hoping his son could freshen up the image of the monarchy.



Jack Utsick


A Nov. 2 trial date has been set for a former concert promoter extradited to the U.S. from Brazil to face charges he operated a $300 million fraud scheme.

U.S. District Judge Cecilia M. Altonaga set the date at a hearing Tuesday for Jack Utsick, who has pleaded not guilty to fraud and money laundering charges.

Prosecutors say Utsick operated his Worldwide Entertainment Inc. company as a Ponzi scheme, repaying older investors with money from newer ones. The scheme allegedly defrauded an estimated 3,300 investors out of nearly $300 million.

The company staged tours of top-name acts including the Rolling Stones, Elton John, Aerosmith and the Pretenders, among others.

Utsick was extradited from Brazil last year after a lengthy court battle. U.S. authorities say he fled there to avoid prosecution.

Entertainment Promoter Extradited from Brazil to Face Charges in $300 Million Securities Fraud Scheme

John P. (Jack) Utsick, who in 2005, according to Billboard Magazine, was the third-largest concert promoter and entertainment manager, made his initial appearance today. Utsick was extradited from Brazil to the United States as a result of his alleged involvement in a Ponzi scheme that defrauded investors out of approximately $300 million.

Wifredo A. Ferrer, United States Attorney for the Southern District of Florida, and George L. Piro, Special Agent in Charge, Federal Bureau of Investigation (FBI), Miami Field Office, made the announcement.

In 2006, John P. (Jack) Utsick, 72, formerly of Miami Beach, fled South Florida to Brazil after the U.S. Securities and Exchange Commission (SEC) filed civil securities fraud charges against him, and after he became aware of a related FBI investigation.

According to a Superseding Indictment filed November 30, 2010, that was unsealed by court order on August 26, 2014, and documents filed in the SEC’s fraud case:

Utsick engaged in a scheme that defrauded more than 3,300 investors out of approximately $300 million. These investors believed that their monies were used to fund Utsick’s concert promotion business, Worldwide Entertainment, Inc. and The Entertainment Group Fund, Inc. (Worldwide), which Utsick operated from at least 1998 through late 2005. As alleged in court documents, Utsick promised investors fixed rates of return ranging from 15% to 25% and, in some instances, an additional percentage of the profits generated by Utsick and his companies related to specific concert events or tours of specific artists. Many investors were encouraged to roll over their principal and purported “profits” from project to project.

As alleged in court documents, most of the entertainment projects lost money and, as a result, Utsick paid earlier investors with funds raised from new investors. Utsick also used investor funds for other activities that were not disclosed to investors, including his own personal stock and options trading, the purchase of two multimillion-dollar condominiums in Miami Beach, a yacht, and the funding of a motion picture, “National Lampoon’s Pledge This!,” starring Paris Hilton.

Utsick produced events and concert tours for numerous artists, including Coldplay, The Rolling Stones, Elton John, Aerosmith, Luis Miguel, and Juanes.

After the SEC filed its securities fraud action in 2006, and Utsick was made aware of the criminal investigation, he fled to Brazil. According to court filings in the SEC case, Utsick went to great lengths to avoid providing evidence to the SEC or accounting for the disappearance of millions of dollars that had been raised from investors.

After the U.S. Department of Justice initiated extradition proceedings in Brazil, Utsick challenged the validity of the indictment in the Brazilian courts. In August 2014, the Supreme Court of Brazil ordered that Utsick be extradited to the United States. Utsick was extradited from Brazil on December 6, 2014, and was taken into custody by the U.S. Marshals Service and transported to Miami. He made his initial appearance today before U.S. Magistrate Judge Jonathan Goodman.

Utsick is charged with eight counts of mail fraud, in violation of Title 18, United States Code, Section 1341. He faces a statutory maximum term of twenty years in prison as to each count. The case is assigned to U.S. District Judge Cecilia M. Altonaga for further proceedings.

Mr. Ferrer commended the investigative efforts of the FBI, the assistance provided by the SEC’s Miami Regional Office, and the efforts of the U.S. Marshals Service to assist with the arrest of the defendant. The matter is being prosecuted by Assistant U.S. Attorney Jerrob Duffy.

A copy of this press release may be found on the website of the United States Attorney’s Office for the Southern District of Florida. Related court documents and information may be found on the website of the District Court for the Southern District of Florida at www.flsd.uscourts.gov.



ABOVE – Kenneth Rijock
BELOW – Gary James Lundgren


Reliable sources in Panama, responding to our series of investigative articles on Gary James Lundgren and Interpacific Investors Services, Inc., the broker-dealer he owns and illegally operates in Panama without a license to sell securities there, have furnished the names of two additional Panama corporations, both reportedly controlled by Lundgren and previously unknown to this blogger.

These corporations are:

(1) Global Bond Investors, SA.

(2) Global Realty Investments, SA.

As you can see, he continues to use companies whose names are deceptively similar to entities owned by ex-President Ricardo Martinelli (e.g. Global Bank), in order to create the false impression that Martinelli either owns them outright, or controls them. Lundgren has been touting his close relationship with the former president, who is now facing massive corruption charges, and fled Panama to avoid arrest and prosecution. Investors could have been deceived into thinking that they were doing business with Panama’s sitting president, when this was clearly not true.

The disclosure of these new corporate entities raises additional questions about Lundgren’s dodgy securities business:

(1) Were worthless bonds sold in Panama to American and Canadian nationals?
(2) Do the real property investments that Lundgren sold in Panama qualify as securities under the generally accepted definition?
(3) Did Lundgren create bogus bond and real estate documents to demonstrate ownership and entice victims to invest, with the intent to defraud his clients?

We shall continue our investigation, and report back to our readers shortly; stay tuned.





Most people realize that emails and other digital communications they once considered private can now become part of their permanent record.

But even as they increasingly use apps that understand what they say, most people don’t realize that the words they speak are not so private anymore, either.

Top-secret documents from the archive of former NSA contractor Edward Snowden show the National Security Agency can now automatically recognize the content within phone calls by creating rough transcripts and phonetic representations that can be easily searched and stored.
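The “phonetic representations” mentioned above can be thought of as a phonetic hash: words that sound alike map to the same searchable code, so a keyword query can hit a rough transcript even when the automatic spelling varies. Purely as an illustration (the NSA’s actual encoding is not public), here is a minimal sketch using the classic Soundex algorithm; the function names are ours:

```python
def soundex(word: str) -> str:
    """Classic Soundex: first letter plus up to three digits for consonant
    classes. Vowels reset the 'previous code'; h and w are transparent."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    prev = codes.get(word[0], "")
    digits = []
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "hw":  # h and w do not break a run of equal codes
            prev = code
    return (first + "".join(digits))[:4].ljust(4, "0")

def phonetic_match(keyword: str, transcript: str) -> bool:
    """True if any word in the transcript sounds like the keyword."""
    target = soundex(keyword)
    return any(soundex(tok) == target for tok in transcript.split() if tok.isalpha())
```

For example, `soundex("Musharaf")` and `soundex("Musharraf")` both produce `M261`, so a search for one spelling would flag a transcript containing the other. Real phonetic search systems encode sound units rather than letters, but the matching principle is the same.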

The documents show NSA analysts celebrating the development of what they called “Google for Voice” nearly a decade ago.

Though perfect transcription of natural conversation apparently remains the Intelligence Community’s “holy grail,” the Snowden documents describe extensive use of keyword searching as well as computer programs designed to analyze and “extract” the content of voice conversations, and even use sophisticated algorithms to flag conversations of interest.

The documents include vivid examples of the use of speech recognition in war zones like Iraq and Afghanistan, as well as in Latin America. But they leave unclear exactly how widely the spy agency uses this ability, particularly in programs that pick up considerable amounts of conversations that include people who live in or are citizens of the United States.

Spying on international telephone calls has always been a staple of NSA surveillance, but the requirement that an actual person do the listening meant it was effectively limited to a tiny percentage of the total traffic. By leveraging advances in automated speech recognition, the NSA has entered the era of bulk listening.

And this has happened with no apparent public oversight, hearings or legislative action. Congress hasn’t shown signs of even knowing that it’s going on.

The USA Freedom Act — the surveillance reform bill that Congress is currently debating — doesn’t address the topic at all. The bill would end an NSA program that does not collect voice content: the government’s bulk collection of domestic calling data, showing who called whom and for how long.

Even if it becomes law, the bill would leave in place a multitude of mechanisms exposed by Snowden that scoop up vast amounts of innocent people’s text and voice communications in the U.S. and across the globe.

Civil liberties experts contacted by The Intercept said the NSA’s speech-to-text capabilities are a disturbing example of the privacy invasions that are becoming possible as our analog world transitions to a digital one.

“I think people don’t understand that the economics of surveillance have totally changed,” Jennifer Granick, civil liberties director at the Stanford Center for Internet and Society, told The Intercept.

“Once you have this capability, then the question is: How will it be deployed? Can you temporarily cache all American phone calls, transcribe all the phone calls, and do text searching of the content of the calls?” she said. “It may not be what they are doing right now, but they’ll be able to do it.”

And, she asked: “How would we ever know if they change the policy?”

Indeed, NSA officials have been secretive about their ability to convert speech to text, and how widely they use it, leaving open any number of possibilities.

That secrecy is the key, Granick said. “We don’t have any idea how many innocent people are being affected, or how many of those innocent people are also Americans.”

I Can Search Against It

NSA whistleblower Thomas Drake, who was trained as a voice processing crypto-linguist and worked at the agency until 2008, told The Intercept that he saw a huge push after the September 11, 2001 terror attacks to turn the massive amounts of voice communications being collected into something more useful.

Human listening was clearly not going to be the solution. “There weren’t enough ears,” he said.

The transcripts that emerged from the new systems weren’t perfect, he said. “But even if it’s not 100 percent, I can still get a lot more information. It’s far more accessible. I can search against it.”

Converting speech to text makes it easier for the NSA to see what it has collected and stored, according to Drake. “The breakthrough was being able to do it on a vast scale,” he said.

More Data, More Power, Better Performance

The Defense Department, through its Defense Advanced Research Projects Agency (DARPA), started funding academic and commercial research into speech recognition in the early 1970s.

What emerged were several systems to turn speech into text, all of which slowly but steadily improved as they were able to work with more data and at faster speeds.

In a brief interview, Dan Kaufman, director of DARPA’s Information Innovation Office, indicated that the government’s ability to automate transcription is still limited.

Kaufman says that automated transcription of phone conversation is “super hard,” because “there’s a lot of noise on the signal” and “it’s informal as hell.”

“I would tell you we are not very good at that,” he said.

In an ideal environment like a news broadcast, he said, “we’re getting pretty good at being able to do these types of translations.”

A 2008 document from the Snowden archive shows that transcribing news broadcasts was already working well seven years ago, using a program called Enhanced Video Text and Audio Processing:

(U//FOUO) EViTAP is a fully-automated news monitoring tool. The key feature of this Intelink-SBU-hosted tool is that it analyzes news in six languages, including Arabic, Mandarin Chinese, Russian, Spanish, English, and Farsi/Persian. “How does it work?” you may ask. It integrates Automatic Speech Recognition (ASR) which provides transcripts of the spoken audio. Next, machine translation of the ASR transcript translates the native language transcript to English. Voila! Technology is amazing.

A version of the system the NSA uses is now even available commercially.

Experts in speech recognition say that in the last decade or so, the pace of technological improvement has been explosive. As information storage became cheaper and more efficient, technology companies were able to store massive amounts of voice data on their servers, allowing them to continually update and improve the models. Enormous processors, tuned as “deep neural networks” that detect patterns like human brains do, produce much cleaner transcripts.

And the Snowden documents show that the same kinds of leaps forward seen in commercial speech-to-text products have also been happening in secret at the NSA, fueled by the agency’s singular access to astronomical processing power and its own vast data archives.

In fact, the NSA has been repeatedly releasing new and improved speech recognition systems for more than a decade.

The first-generation tool, which made keyword-searching of vast amounts of voice content possible, was rolled out in 2004 and code-named RHINEHART.

“Voice word search technology allows analysts to find and prioritize intercept based on its intelligence content,” says an internal 2006 NSA memo entitled “For Media Mining, the Future Is Now!”

The memo says that intelligence analysts involved in counterterrorism were able to identify terms related to bomb-making materials, like “detonator” and “hydrogen peroxide,” as well as place names like “Baghdad” or people like “Musharaf.”

RHINEHART was “designed to support both real-time searches, in which incoming data is automatically searched by a designated set of dictionaries, and retrospective searches, in which analysts can repeatedly search over months of past traffic,” the memo explains (emphasis in original).
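The two modes the memo distinguishes map onto a familiar information-retrieval pattern: match each incoming transcript against a standing keyword dictionary as it arrives, while also building an inverted index so stored traffic can be queried after the fact. A minimal, purely illustrative sketch (the class and method names are ours, not anything from the documents):

```python
from collections import defaultdict

class VoiceSearchIndex:
    """Toy model of the two search modes described: real-time matching of
    incoming transcripts against a keyword dictionary, and retrospective
    queries over all stored traffic."""

    def __init__(self, dictionary):
        self.dictionary = {w.lower() for w in dictionary}
        self.store = []                   # every transcript, kept for later queries
        self.inverted = defaultdict(set)  # word -> ids of transcripts containing it

    def ingest(self, cut_id, transcript):
        """Real-time path: store the cut, index it, and return any flagged keywords."""
        words = set(transcript.lower().split())
        self.store.append((cut_id, transcript))
        for w in words:
            self.inverted[w].add(cut_id)
        return sorted(words & self.dictionary)

    def retrospective_search(self, keyword):
        """Retrospective path: query months of past traffic after the fact."""
        return sorted(self.inverted[keyword.lower()])
```

Usage, with keywords drawn from the memo’s own examples: `VoiceSearchIndex(["detonator", "baghdad"])` would flag `"shipment reaches Baghdad tomorrow"` at ingest time, and a later `retrospective_search("Baghdad")` would return the IDs of every stored cut containing the word, without anyone having listened to the audio.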

As of 2006, RHINEHART was operating “across a wide variety of missions and languages” and was “used throughout the NSA/CSS [Central Security Service] Enterprise.”

But even then, a newer, more sophisticated product was already being rolled out by the NSA’s Human Language Technology (HLT) program office. The new system, called VoiceRT, was first introduced in Baghdad, and “designed to index and tag 1 million cuts per day.”

The goal, according to another 2006 memo, was to use voice processing technology to be able to “index, tag and graph” all intercepted communications. “Using HLT services, a single analyst will be able to sort through millions of cuts per day and focus on only the small percentage that is relevant,” the memo states.

A 2009 memo from the NSA’s British partner, GCHQ, describes how “NSA have had the BBN speech-to-text system Byblos running at Fort Meade for at least 10 years. (Initially they also had Dragon.) During this period they have invested heavily in producing their own corpora of transcribed Sigint in both American English and an increasing range of other languages.” (GCHQ also noted that it had its own small corpora of transcribed voice communications, most of which happened to be “Northern Irish accented speech.”)

VoiceRT, in turn, was surpassed a few years after its launch. According to the intelligence community’s “Black Budget” for fiscal year 2013, VoiceRT was decommissioned and replaced in 2011 and 2012, so that by 2013 the NSA could operationalize a new system. This system, apparently called SPIRITFIRE, could handle more data, faster. SPIRITFIRE would be “a more robust voice processing capability based on speech-to-text keyword search and paired dialogue transcription.”

Extensive Use Abroad

Voice communications can be collected by the NSA whether they are being sent by regular phone lines, over cellular networks, or through voice-over-internet services. Previously released documents from the Snowden archive describe enormous efforts by the NSA during the last decade to get access to voice-over-internet content like Skype calls, for instance. And other documents in the archive chronicle the agency’s adjustment to the fact that an increasingly large percentage of conversations, even those that start as landline or mobile calls, end up as digitized packets flying through the same fiber-optic cables that the NSA taps so effectively for other data and voice communications.

The Snowden archive, as searched and analyzed by The Intercept, documents extensive use of speech-to-text by the NSA to search through international voice intercepts — particularly in Iraq and Afghanistan, as well as Mexico and Latin America.

For example, speech-to-text was a key but previously unheralded element of the sophisticated analytical program known as the Real Time Regional Gateway (RTRG), which started in 2005 when newly appointed NSA chief Keith B. Alexander, according to the Washington Post, “wanted everything: Every Iraqi text message, phone call and e-mail that could be vacuumed up by the agency’s powerful computers.”

The Real Time Regional Gateway was credited with playing a role in “breaking up Iraqi insurgent networks and significantly reducing the monthly death toll from improvised explosive devices.” The indexing and searching of “voice cuts” was deployed to Iraq in 2006. By 2008, RTRG was operational in Afghanistan as well.

Keyword spotting extended to Iranian intercepts as well. A 2006 memo reported that RHINEHART had been used successfully by Persian-speaking analysts who “searched for the words ‘negotiations’ or ‘America’ in their traffic, and RHINEHART located a very important call that was transcribed verbatim providing information on an important Iranian target’s discussion of the formation of the new Iraqi government.”

According to a 2011 memo, “How is Human Language Technology (HLT) Progressing?”, the NSA that year deployed “HLT Labs” to Afghanistan, NSA facilities in Texas and Georgia, and listening posts in Latin America run by the Special Collection Service, a joint NSA/CIA unit that operates out of embassies and other locations.

“Spanish is the most mature of our speech-to-text analytics,” the memo says, noting that the NSA and its Special Collection Service sites in Latin America have had “great success searching for Spanish keywords.”

The memo offers an example from NSA Texas, where an analyst newly trained on the system used a keyword search to find previously unreported information on a target involved in drug-trafficking. In another case, an official at a Special Collection Service site in Latin America “was able to find foreign intelligence regarding a Cuban official in a fraction of the usual time.”

In a 2011 article, “Finding Nuggets — Quickly — in a Heap of Voice Collection, From Mexico to Afghanistan,” an intelligence analysis technical director from NSA Texas described the “rare life-changing instance” when he learned about human language technology, and its ability to “find the exact traffic of interest within a mass of collection.”

Analysts in Texas found the new technology a boon for spying. “From finding tunnels in Tijuana, identifying bomb threats in the streets of Mexico City, or shedding light on the shooting of US Customs officials in Potosi, Mexico, the technology did what it advertised: It accelerated the process of finding relevant intelligence when time was of the essence,” he wrote. (Emphasis in original.)

The author of the memo was also part of a team that introduced the technology to military leaders in Afghanistan. “From Kandahar to Kabul, we have traveled the country explaining NSA leaders’ vision and introducing SIGINT teams to what HLT analytics can do today and to what is still needed to make this technology a game-changing success,” the memo reads.

Extent of Domestic Use Remains Unknown

What’s less clear from the archive is how extensively this capability is used to transcribe or otherwise index and search voice conversations that primarily involve what the NSA terms “U.S. persons.”

The NSA did not answer a series of detailed questions about automated speech recognition, even though an NSA “classification guide” that is part of the Snowden archive explicitly states that “The fact that NSA/CSS has created HLT models” for speech-to-text processing as well as gender, language and voice recognition, is “UNCLASSIFIED.”

Also unclassified: The fact that the processing can sort and prioritize audio files for human linguists, and that the statistical models are regularly being improved and updated based on actual intercepts. By contrast, because they’ve been tuned using actual intercepts, the specific parameters of the systems are highly classified.

“The National Security Agency employs a variety of technologies in the course of its authorized foreign-intelligence mission,” spokesperson Vanee’ Vines wrote in an email to The Intercept. “These capabilities, operated by NSA’s dedicated professionals and overseen by multiple internal and external authorities, help to deter threats from international terrorists, human traffickers, cyber criminals, and others who seek to harm our citizens and allies.”

Vines did not respond to the specific questions about privacy protections in place related to the processing of domestic or domestic-to-international voice communications. But she wrote that “NSA always applies rigorous protections designed to safeguard the privacy not only of U.S. persons, but also of foreigners abroad, as directed by the President in January 2014.”

The presidentially appointed but independent Privacy and Civil Liberties Oversight Board (PCLOB) didn’t mention speech-to-text technology in its public reports.

“I’m not going to get into whether any program does or does not have that capability,” PCLOB chairman David Medine told The Intercept.

His board’s reports, he said, contained only information that the intelligence community agreed could be declassified.

“We went to the intelligence community and asked them to declassify a significant amount of material,” he said. The “vast majority” of that material was declassified, he said. But not all — including “facts that we thought could be declassified without compromising national security.”

Hypothetically, Medine said, the ability to turn voice into text would raise significant privacy concerns. And it would also raise questions about how the intelligence agencies “minimize” the retention and dissemination of material — particularly involving U.S. persons — that doesn’t include information they’re explicitly allowed to keep.

“Obviously it increases the ability of the government to process information from more calls,” Medine said. “It would also allow the government to listen in on more calls, which would raise more of the kind of privacy issues that the board has raised in the past.”

“I’m not saying the government does or doesn’t do it,” he said, “just that these would be the consequences.”

A New Learning Curve

Speech recognition expert Bhiksha Raj likens the current era to the early days of the Internet, when people didn’t fully realize how the things they typed would last forever.

“When I started using the Internet in the 90s, I was just posting stuff,” said Raj, an associate professor at Carnegie Mellon University’s Language Technologies Institute. “It never struck me that 20 years later I could go Google myself and pull all this up. Imagine if I posted something on or something like that, and now that post is going to embarrass me forever.”

The same is increasingly becoming the case with voice communication, he said. And the stakes are even higher, given that the majority of the world’s communication has historically been conducted by voice, and it has traditionally been considered a private mode of communication.

“People still aren’t realizing quite the magnitude that the problem could get to,” Raj said. “And it’s not just surveillance,” he said. “People are using voice services all the time. And where does the voice go? It’s sitting somewhere. It’s going somewhere. You’re living on trust.” He added: “Right now I don’t think you can trust anybody.”

The Need for New Rules

Kim Taipale, executive director of the Stilwell Center for Advanced Studies in Science and Technology Policy, is one of several people who tried a decade ago to get policymakers to recognize that existing surveillance law doesn’t adequately deal with new global communication networks and advanced technologies, including speech recognition.

“Things aren’t ephemeral anymore,” Taipale told The Intercept. “We’re living in a world where many things that were fleeting in the analog world are now on the permanent record. The question then becomes: what are the consequences of that and what are the rules going to be to deal with those consequences?”

Realistically, Taipale said, “the ability of the government to search voice communication in bulk is one of the things we may have to live with under some circumstances going forward.” But there at least need to be “clear public rules and effective oversight to make sure that the information is only used for appropriate law-enforcement or national security purposes consistent with Constitutional principles.”

Ultimately, Taipale said, a system where computers flag suspicious voice communications could be less invasive than one where people do the listening, given the potential for human abuse and misuse to lead to privacy violations. “Automated analysis has different privacy implications,” he said.

But to Jay Stanley, a senior policy analyst with the ACLU’s Speech, Privacy and Technology Project, the distinction between a human listening and a computer listening is irrelevant in terms of privacy, possible consequences, and a chilling effect on speech.

“What people care about in the end, and what creates chilling effects in the end, are consequences,” he said. “I think that over time, people would learn to fear computerized eavesdropping just as much as they fear eavesdropping by humans, because of the consequences that it could bring.”

Indeed, computer listening could raise new concerns. One of the internal NSA memos from 2006 says an “important enhancement under development is the ability for this HLT capability to predict what intercepted data might be of interest to analysts based on the analysts’ past behavior.”

Citing Amazon’s ability to not just track but predict buyer preferences, the memo says that an NSA system designed to flag interesting intercepts “offers the promise o