Biometric and digital personal data: the potential for abuse and the violation of the right to privacy

Vanlok
The first of a pair of articles I am offering as "food for thought" and a prompt for discussion:


The Genetic Panopticon: We're All Suspects In A DNA Lineup, Waiting To Be Matched With A Crime

“Solving unsolved crimes is a noble objective, but it occupies a lower place in the American pantheon of noble objectives than the protection of our people from suspicionless law-enforcement searches… Make no mistake about it…your DNA can be taken and entered into a national DNA database if you are ever arrested, rightly or wrongly, and for whatever reason… Perhaps the construction of such a genetic panopticon is wise. But I doubt that the proud men who wrote the charter of our liberties would have been so eager to open their mouths for royal inspection.”
- Justice Antonin Scalia dissenting in Maryland v. King
Be warned: the DNA detectives are on the prowl.




Whatever skeletons may be lurking on your family tree or in your closet, whatever crimes you may have committed, whatever associations you may have with those on the government’s most wanted lists: the police state is determined to ferret them out.

In an age of overcriminalization, round-the-clock surveillance, and a police state eager to flex its muscles in a show of power, we are all guilty of some transgression or other.
No longer can we consider ourselves innocent until proven guilty.
Now we are all suspects in a DNA lineup waiting to be matched up with a crime.

Suspect State, meet the Genetic Panopticon.
DNA technology in the hands of government officials will complete our transition to a Surveillance State in which prison walls are disguised within the seemingly benevolent trappings of technological and scientific progress, national security and the need to guard against terrorists, pandemics, civil unrest, etc.
By accessing your DNA, the government will soon know everything else about you that they don’t already know: your family chart, your ancestry, what you look like, your health history, your inclination to follow orders or chart your own course, etc.
It’s getting harder to hide, even if you think you’ve got nothing to hide.

Armed with unprecedented access to DNA databases amassed by the FBI and ancestry websites, as well as hospital newborn screening programs, police are using forensic genealogy, which allows them to match an unknown suspect’s crime scene DNA with that of any family member in a genealogy database, to solve cold cases that have remained unsolved for decades.
By submitting your DNA to a genealogical database such as Ancestry and 23andMe, you’re giving the police access to the genetic makeup, relationships and health profiles of every relative—past, present and future—in your family, whether or not they ever agreed to be part of such a database.
It no longer even matters if you’re among the tens of millions of people who have added their DNA to ancestry databases. As Brian Resnick reports, public DNA databases have grown so massive that they can be used to find you even if you’ve never shared your own DNA.
That simple transaction—a spit sample or a cheek swab in exchange for getting to learn everything about one’s ancestral makeup, where one came from, and who is part of one’s extended family—is the price of entry into the Suspect State for all of us.
After all, a DNA print reveals everything about “who we are, where we come from, and who we will be.” It can also be used to predict the physical appearance of potential suspects.
It’s what police like to refer to as a “modern fingerprint.”
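To make that “familial matching” idea a little more concrete, here is a deliberately simplified sketch in Python of what comparing STR profiles against a database might look like. This is not how CODIS or any genealogy service actually works; real forensic kinship analysis relies on likelihood ratios and population allele frequencies, and every marker name, profile and identifier below is made up for illustration.

```python
# Simplified illustration of familial DNA screening: compare an unknown
# STR profile against a database and rank entries by shared alleles.
# Real forensic kinship analysis uses likelihood ratios and population
# allele frequencies; this toy version only counts overlapping alleles.

# Each profile: marker name -> pair of allele repeat counts (hypothetical values).
crime_scene = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9.3)}

database = {
    "person_A": {"D8S1179": (12, 13), "D21S11": (30, 31), "TH01": (6, 7)},
    "person_B": {"D8S1179": (10, 11), "D21S11": (27, 29), "TH01": (8, 9)},
}

def shared_alleles(profile_1, profile_2):
    """Count alleles shared per marker between two STR profiles."""
    total = 0
    for marker, alleles in profile_1.items():
        if marker in profile_2:
            total += len(set(alleles) & set(profile_2[marker]))
    return total

# Rank database entries: higher overlap -> closer potential relative to investigate.
ranked = sorted(database, key=lambda name: shared_alleles(crime_scene, database[name]), reverse=True)
print(ranked)  # e.g. ['person_A', 'person_B']
```

Even in this toy form, the point the article makes is visible: the person flagged first never submitted the crime scene sample; they simply share enough alleles with whoever did.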

Whereas fingerprint technology created a watershed moment for police in their ability to “crack” a case, DNA technology is now being hailed by law enforcement agencies as the magic bullet in crime solving, especially when it helps them crack cold cases involving serial murderers and rapists.
After all, who wouldn’t want to get psychopaths and serial rapists off the streets and safely behind bars, right?
At least, that’s the argument being used by law enforcement to support their unrestricted access to these genealogy databases, and they’ve got the success stories to prove it.
For instance, a 68-year-old Pennsylvania man was arrested and charged with the brutal rape and murder of a young woman almost 50 years earlier. Relying on genealogical research suggesting that the killer had ancestors who hailed from a small town in Italy, investigators narrowed their findings down to one man whose DNA, obtained from a discarded coffee cup, matched the killer’s.
In another cold case investigation, a 76-year-old man was arrested for two decades-old murders after his DNA was collected from a breathalyzer during an unrelated traffic stop.
Yet it’s not just psychopaths and serial rapists who are getting caught up in the investigative dragnet. In the police state’s pursuit of criminals, anyone who comes up as a possible DNA match—including distant family members—suddenly becomes part of a circle of suspects that must be tracked, investigated and ruled out.
Victims of past crimes are also getting added to the government’s growing DNA database of potential suspects. For instance, San Francisco police used a rape victim’s DNA, which was on file from a 2016 sexual assault, to arrest the woman for allegedly being involved in a property crime that took place in 2021.
In this way, “guilt by association” has taken on new connotations in a technological age in which one is just a DNA sample away from being considered a person of interest in a police investigation. As Jessica Cussins warns in Psychology Today, “The fundamental fight—that data from potentially innocent people should not be used to connect them to unrelated crimes—has been lost.”
Until recently, the government was required to at least observe some basic restrictions on when, where and how it could access someone’s DNA. That was turned on its head by various U.S. Supreme Court rulings that heralded the loss of privacy on a cellular level.

For instance, the U.S. Supreme Court ruled in Maryland v. King that taking DNA samples from a suspect doesn’t violate the Fourth Amendment. The Court’s subsequent decision to let stand the Maryland Court of Appeals’ ruling in Raynor v. Maryland, which essentially determined that individuals do not have a right to privacy when it comes to their DNA, made Americans even more vulnerable to the government accessing, analyzing and storing their DNA without their knowledge or permission.
It’s all been downhill since then.

Indeed, the government has been relentless in its efforts to get hold of our DNA, either through mandatory programs carried out in connection with law enforcement and corporate America, by warrantlessly accessing our familial DNA shared with genealogical services such as Ancestry and 23andMe, or through the collection of our “shed” or “touch” DNA.
Get ready, folks, because the government has embarked on a diabolical campaign to create a nation of suspects predicated on a massive national DNA database.
This has been helped along by Congress (which adopted legislation allowing police to collect and test DNA immediately following arrests), President Trump (who signed the Rapid DNA Act into law), the courts (which have ruled that police can routinely take DNA samples from people who are arrested but not yet convicted of a crime), and local police agencies (which are chomping at the bit to acquire this new crime-fighting gadget).

For example, Rapid DNA machines—portable, about the size of a desktop printer, highly unregulated, far from fool-proof, and so fast that they can produce DNA profiles in less than two hours—allow police to go on fishing expeditions for any hint of possible misconduct using DNA samples.
Journalist Heather Murphy explains: “As police agencies build out their local DNA databases, they are collecting DNA not only from people who have been charged with major crimes but also, increasingly, from people who are merely deemed suspicious, permanently linking their genetic identities to criminal databases.”

All 50 states now maintain their own government DNA databases, although the protocols for collection differ from state to state. Increasingly, much of the data from local databanks is being uploaded to CODIS, the FBI’s massive DNA database, which has become a de facto way to identify and track the American people from birth to death.
Even hospitals have gotten in on the game by taking and storing newborn babies’ DNA, often without their parents’ knowledge or consent. It’s part of the government’s mandatory genetic screening of newborns. In many states, the DNA is stored indefinitely. There’s already a move underway to carry out whole genome sequencing on newborns, ostensibly to help diagnose rare diseases earlier and improve health later in life, which constitutes an ethical minefield all by itself.
What this means for those being born today is inclusion in a government database that contains intimate information about who they are, their ancestry, and what awaits them in the future, including their inclinations to be followers, leaders or troublemakers.

Just recently, in fact, police in New Jersey accessed the DNA from a nine-year-old blood sample of a newborn baby in order to identify the child’s father as a suspect in a decades-old sexual assault.
The ramifications of this kind of DNA profiling are far-reaching.
At a minimum, these DNA databases do away with any semblance of privacy or anonymity.
The lucrative possibilities for hackers and commercial entities looking to profit off one’s biological record are endless. The global human identification market is projected to reach $6.5 billion by 2032.

These genetic databases and genomic technology also make us that much more vulnerable to creeps and cyberstalkers, genetic profiling, and those who would weaponize the technology against us.
Unfortunately, the debate over genetic privacy—and when one’s DNA becomes a public commodity outside the protection of the Fourth Amendment’s prohibition on warrantless searches and seizures—continues to lag far behind the government and Corporate America’s encroachments on our rights.
Moreover, while much of the public debate, legislative efforts and legal challenges in recent years have focused on the protocols surrounding when police can legally collect a suspect’s DNA (with or without a search warrant and whether upon arrest or conviction), the question of how to handle “shed” or “touch” DNA has largely slipped through without much debate or opposition.
As scientist Leslie A. Pray notes:
We all shed DNA, leaving traces of our identity practically everywhere we go… In fact, the garbage you leave for curbside pickup is a potential gold mine of this sort of material. All of this shed or so-called abandoned DNA is free for the taking by local police investigators hoping to crack unsolvable cases… shed DNA is also free for inclusion in a secret universal DNA databank.
(continued)
 

Vanlok
(continued)

What this means is that if you have the misfortune to leave your DNA traces anywhere a crime has been committed, you’ve already got a file somewhere in some state or federal database—albeit possibly a file without a name. As Heather Murphy warns in the New York Times: “The science-fiction future, in which police can swiftly identify robbers and murderers from discarded soda cans and cigarette butts, has arrived… Genetic fingerprinting is set to become as routine as the old-fashioned kind.”
As the dissenting opinion to the Maryland Court of Appeals’ shed DNA ruling in Raynor rightly warned, “A person can no longer vote, participate in a jury, or obtain a driver's license, without opening up his genetic material for state collection and codification.” Indeed, by refusing to hear the Raynor case, the U.S. Supreme Court gave its tacit approval for government agents to collect shed DNA, likening it to a person’s fingerprints or the color of their hair, eyes or skin.
It’s just a matter of time before government agents will know everywhere we’ve been and how long we were at each place by following our shed DNA. After all, scientists can already track salmon across hundreds of square miles of streams and rivers using DNA.
Today, helped along by robotics and automation, DNA processing, analysis and reporting takes far less time and can bring forth all manner of information, right down to a person’s eye color and relatives. Incredibly, one company specializes in creating “mug shots” for police based on DNA samples from unknown “suspects” which are then compared to individuals with similar genetic profiles.
Of course, none of these technologies are infallible.

DNA evidence can be wrong, either through human error, tampering, or even outright fabrication, and it happens more often than we are told.
What this amounts to is a scenario in which we have little to no defense against charges of wrongdoing, especially when “convicted” by technology, and even less protection against the government sweeping up our DNA in much the same way it sweeps up our phone calls, emails and text messages.
As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, it’s only a matter of time before the police state’s pursuit of criminals from the past expands into genetic profiling and a preemptive hunt for criminals of the future.
 

Vanlok

Excerpt:

Prioritizing ‘economic identity’
Next, the researchers looked at how an “identification for development” agenda driven by multiple global actors came into being.

They discussed the digital ID system called Aadhaar that is currently being tried out by the government of India and the digital ID system promoted by the World Bank — Identification for Development, commonly called the ID4D Initiative.

The ID4D Initiative draws inspiration from the highly criticized Aadhaar digital ID system in India.

In the Aadhaar system, individuals are voluntarily assigned a 12-digit random number by the Unique Identification Authority of India — a statutory authority backed by the government of India — that establishes the “uniqueness” of individuals with the help of demographic and biometric technologies.
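As a purely format-level aside that is not part of the report being quoted: the 12-digit Aadhaar number is commonly documented as ending in a Verhoeff check digit. The Python sketch below only validates that checksum; it says nothing about UIDAI's actual systems or the biometric de-duplication described above, and the sample number is invented.

```python
# Verhoeff checksum validation for a 12-digit ID, as commonly described
# for Aadhaar numbers (the last digit is the check digit). Illustrative only.

D = [  # dihedral group D5 multiplication table
    [0,1,2,3,4,5,6,7,8,9], [1,2,3,4,0,6,7,8,9,5], [2,3,4,0,1,7,8,9,5,6],
    [3,4,0,1,2,8,9,5,6,7], [4,0,1,2,3,9,5,6,7,8], [5,9,8,7,6,0,4,3,2,1],
    [6,5,9,8,7,1,0,4,3,2], [7,6,5,9,8,2,1,0,4,3], [8,7,6,5,9,3,2,1,0,4],
    [9,8,7,6,5,4,3,2,1,0],
]
P = [  # position-dependent permutation table
    [0,1,2,3,4,5,6,7,8,9], [1,5,7,6,2,8,3,0,9,4], [5,8,0,3,7,9,6,1,4,2],
    [8,9,1,6,0,4,3,5,2,7], [9,4,5,3,1,2,6,8,7,0], [4,2,8,6,5,7,3,9,0,1],
    [2,7,9,3,8,0,6,4,1,5], [7,0,4,6,9,1,3,2,5,8],
]

def verhoeff_valid(number: str) -> bool:
    """Return True if the digit string passes the Verhoeff checksum."""
    c = 0
    for i, digit in enumerate(reversed(number)):
        c = D[c][P[i % 8][int(digit)]]
    return c == 0

print(verhoeff_valid("123456789010"))  # made-up 12-digit string, just to exercise the function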

This digital ID model, NYU report authors said, is dangerous because it prioritizes an “economic identity” for an individual.

The model is not about an individual’s identity alone, confirmed Joseph Atick, Ph.D., executive chairman of the influential ID4Africa, a platform where African governments and major companies in the digital ID market meet.

It’s about their economic interactions, Atick said.

The ID4D model “enables and interacts with authentication platforms, payments systems, digital signatures, data sharing, KYC systems, consent management and sectoral delivery platforms,” Atick announced at the start of ID4Africa’s 2022 annual meeting in mid-June, at the Palais de Congrès in Marrakesh, Morocco.

The authors of the NYU report criticized this model:

“The goal then, is not so much identity as it is identification. The three interlinked processes of identification, registration, and authorization are an exercise of power.
“Through this process, one actor acknowledges or denies another actor’s identity attributes. Individuals may be empowered through the process of identification, but such systems have long been used for the opposite purpose: to deny rights to certain groups and exclude them.”
 

Vanlok
EU governments could soon back the highly controversial Child Sexual Abuse Regulation (CSAR), colloquially known as "chat control", on the basis of a new proposal from the Belgian interior minister. According to leaked information obtained by a Pirate Party MEP, this could happen as early as June.

The proposal stipulates that users of communication apps must consent to having all images and videos they send automatically scanned and potentially reported to the EU and the police.

Consent would be obtained through the "terms of use" text or a pop-up message. To make this possible, "backdoor" monitoring would also have to be implemented in secure, encrypted person-to-person messaging services, which essentially means the end of private messaging.
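The leaked proposal does not spell out an implementation, but conceptually "scanning before encryption" means the client checks every attachment against some blocklist and reports hits before the message is ever encrypted. Here is a minimal, purely hypothetical Python sketch of that idea; it uses exact hash matching only (real proposals tend to involve perceptual hashing or classifiers), and the blocklist entry and reporting hook are invented.

```python
# Conceptual sketch of client-side scanning: hash an attachment and check it
# against a blocklist *before* end-to-end encryption happens. The blocklist
# entry and reporting function are hypothetical placeholders.
import hashlib

BLOCKLIST = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}  # made-up entry

def report_to_authority(digest: str) -> None:
    # Placeholder: in the proposal's model this would go to an EU centre / the police.
    print(f"flagged attachment {digest[:12]}... reported")

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment would be flagged (and reported) client-side."""
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in BLOCKLIST:
        report_to_authority(digest)
        return True
    return False

if not scan_before_send(b"holiday photo bytes"):
    pass  # only now would the client encrypt and send the message
```

The privacy objection follows directly from the structure: the check necessarily runs on plaintext on your own device, so the "end-to-end" guarantee only begins after someone else's blocklist has already seen the content.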
Protection against child sexual abuse?
While at the same time they promote LGBTQ in kindergartens, schools and universities???

What hypocrites.
Automatically merged post:

They want to protect children?
OK. Absolutely! But let them start by protecting kids and banning the LGBT freaks from getting anywhere near kindergartens/schools. Let them start by banning LGBTQ propaganda on social media, in the media and in pop culture. And so on.

They're just playing on our weak points to get us to agree to giving up our privacy.
We should massively boycott the popular communication apps and switch to open-source alternatives.

Signal, for example. Not that anything is truly secure these days, but open source at least means that if there is a backdoor, sooner or later someone will find it in the code.
 

Vanlok

A report published by 404 Media suggests that a major advertising company has used device microphones to spy on users and deliver targeted ads.

The article is behind a paywall, but here's what it talks about.

Are your smart devices listening to you?
Cox Media Group (CMG), which is a partner of Meta (Facebook), Microsoft, Amazon, and Google, claimed that it can deliver targeted adverts based on what users were talking about near device microphones. The company does this via something called Active Listening. Apparently, CMG not only openly admitted that it could use its technology to listen to what users say near smart devices, it actually advertised its capabilities on a web page. Do you get it? They were bragging about wiretapping, on their website!

A set of pitch deck slides obtained by 404 Media highlights the features of Active Listening. You can access an archive of it here. CMG deleted the web page where it had bragged about Active Listening the last time this controversy arose. That does seem like an admission of guilt, but it's not enough evidence on its own.

Speaking of which, there was a similar allegation last year, but it was promptly denied by Google and Amazon. According to Ars Technica, Google pointed out that Android has restrictions in place to prevent apps from capturing audio when they are not in use. Android displays an icon when an app accesses the microphone (iOS does this too). Amazon stated that Echo devices only engage when the user speaks the wake word. It also clarified that users could review their voice history in the Alexa app's settings and see what kind of data was processed. What else were they going to say?


Techdirt points out that this wasn't the first time Cox tried something like this: the cable giant wanted to embed microphones and cameras in cable boxes back in 2009 to monitor people. An article by The Byte linked to an archived version of a blog post from CMG outlining the capabilities of Active Listening. Gizmodo says that Amazon told them it has not worked with CMG on the program and has no such plans. Meta also denied the allegations. Google ended its partnership program with CMG after the report by 404 Media was published. Make of that what you will.

The new pitch deck, which you can find here, highlights some alarming claims, such as how AI can collect and analyze voice data from 470+ sources, including behavioral data that can identify an audience that is "ready to buy" the products. One of the diagrams in the slides features a smartphone, a TV and a smart speaker, probably an example of the devices being used to listen to users. The "Predictive Audience Technology" can then use the data to build an audience list and target it with ads. Honestly, it sounds far-fetched. Is this even possible? Which app does it use to listen to users?

I'm not saying that CMG is innocent, or that 404 Media is exaggerating the controversy. But we need proper evidence to back up an accusation; so far everything seems to be purely circumstantial, based on the slides. Whether or not this technology was actually used may remain a mystery.

This isn't a new theory; for nearly a decade I have heard about rather unusual experiences that closely resemble these allegations. People I know personally have claimed that their devices were listening to them, alleging that their phone displayed ads about things they had been talking about.

A friend of mine once jokingly claimed that Facebook was showing her ads for places related to locations where she wanted to go, and that it was like magic. Was the phone listening to her? I argued that it wasn't funny, and that she must have looked up those cities on Google (or Google Maps) in a browser, which could then have resulted in cross-site tracking by other websites. That data, in combination with IP tracking, may then have been associated with the social account she was using. The social network could then use the data on her other devices, which could result in ads for hotels, restaurants and stores in those places. It has to be targeted advertising; that is the only logical explanation. But it is still creepy.
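To illustrate that mechanism without any microphone: a tracker only needs a stable identifier (a third-party cookie, or an IP-plus-device fingerprint) to stitch signals from different sites into one interest profile and reuse it across a person's devices. Here is a toy Python sketch with entirely made-up identifiers, sources and events; no real ad network's API is being described.

```python
# Toy illustration of cross-site / cross-device ad profiling without any
# microphone: events from different sites are keyed to one stable ID and
# merged into a single interest profile used for targeting. All IDs invented.
from collections import defaultdict

profiles = defaultdict(set)  # stable tracking ID -> set of interest signals

def record_event(tracking_id: str, source: str, signal: str) -> None:
    """A tracker pixel / SDK call reporting what a user did on some site."""
    profiles[tracking_id].add(signal)
    print(f"[{source}] {tracking_id} -> {signal}")

# Same person, different sites and devices, one cookie/fingerprint ID:
record_event("cookie_8f2a", "maps-search", "travel:lisbon")
record_event("cookie_8f2a", "hotel-blog", "travel:lisbon")
record_event("cookie_8f2a", "social-app-mobile", "login")  # links the mobile device

def pick_ads(tracking_id: str) -> list[str]:
    """Pick ad categories matching the merged profile."""
    interests = profiles[tracking_id]
    return [s.split(":", 1)[1] + " hotels" for s in interests if s.startswith("travel:")]

print(pick_ads("cookie_8f2a"))  # e.g. ['lisbon hotels']
```

The unsettling "it heard me" effect falls out of nothing more than this: one identifier, a handful of browsing signals, and a profile that follows you from the laptop to the phone.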


Don't get me wrong, I have no love for social networks or big tech, I don't trust them, and I certainly won't defend their practices. But if you stop and analyze the evidence, there's not much to go on with these allegations.

These marketing companies could be listening through your mobile device's microphone; I can't say whether they do or not. But this is how social networks learn about you: you share your favorite food, sports team, books, movies, school, college or workplace, your likes and dislikes, photos and videos of you and your family, your shopping habits, medical history, geolocation, etc. That is how they sell ads to you, by profiling you. You are the product!

I know I'm probably going to sound like a conspiracy theorist, but guess what big tech's next weapon is? AI chatbots that you interact with by text or speech. You may think those chatbots are cool and helpful. But you are feeding them your data; they learn your preferences and everything about you, all under the pretext of simplifying your tasks. What are you talking about? The AI is assisting me and making my life easier; everything is processed on my device, anonymized and end-to-end encrypted. Are you sure about that? All it takes is a simple knock from a government agency, and a backdoor will appear. They are not just training AI language models on your data; it is a tunnel that leads directly into your life. Windows Recall may just be the tip of the iceberg. AI tools are the biggest threat to our privacy, even worse than ads and trackers, because you are willingly using them.





 
