The dawn of a post-privacy era

This post is a compiled snippet from a Twitter thread that I've transported over here for posterity. The original thread started here: https://twitter.com/b_cavello/status/1157823454424395777

When I was an undergrad, I joined my school’s Computer Security Group (aka hacker club).
I was one of the only (if not THE only) non-CS students in the group, but everyone was super welcoming and generous with their knowledge.

Every couple of weeks, the hacker club would host presentations from members and guest speakers on different topics, often drawn from the history of infosec.
Topics included things like early cyphers, lock-picking, and steganography.

Steganography (or “stego”) was one of my favorite topics because it felt like such a perfect embodiment of “hiding in plain sight.”
Especially “least significant bit” stego, which just sliiightly modifies the original file’s data, like the colors of some pixels.
en.wikipedia.org/wiki/Steganography
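To make the "least significant bit" idea concrete, here's a minimal sketch in Python. It operates on a toy bytearray standing in for raw pixel data (real stego tools work on actual image formats like PNG); the `embed`/`extract` functions and the sample message are illustrative, not any particular tool's API.

```python
def embed(pixels, message):
    """Hide each bit of `message` in the lowest bit of one pixel byte."""
    # Unpack the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(pixels, length):
    """Recover `length` hidden bytes from the lowest bits of the pixels."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        data.append(byte)
    return bytes(data)

cover = bytes(range(200, 248))   # 48 toy "pixel" bytes
secret = b"hi"                   # 2 bytes = 16 bits, needs 16 pixels
stego = embed(cover, secret)

assert extract(stego, len(secret)) == secret
# Every pixel changed by at most 1 out of 255 — imperceptible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The key point is in that last assertion: each carrier byte changes by at most 1, which is why the image looks identical even though it now carries a hidden payload.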

Brief tangent: there are now some interesting intersections of old-school hacker fun and AI/computer vision in the form of adversarial examples. I think this space is still underdeveloped as a purpose-built technique, but it has some promise!

more to the point:
Even though old-school cyphers and techniques like steganography are cool and fun to learn about, they’re not good security.

One of the early lessons I learned from my hacker mentors was that “security through obscurity” is not good security at all.

Apparently critiques of “security through obscurity” date back to before the 1900s! 🙀
en.wikipedia.org/wiki/Security_through_obscurity

Wikipedia entry reads: An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked and who, in response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, said: "Rogues are very keen in their profession, and know already much more than we can teach them."

I mention all this because I think “security through obscurity” is actually our de facto operating procedure. Our world is built on the assumption that the effort it would take to threaten our security (as individuals) is just... not worth it.

The issue is that the cost of information capture & retrieval has dropped DRAMATICALLY.

The resolution with which we can record video, for instance, is ridiculous, and the tech is ubiquitous (in most of our pockets). The ability to search back through footage is increasingly accessible as well.

Technology is awesome
Both in the modern and historical sense of the word
But society has not kept up with the breakneck rate of change. We still exist under the assumption of security through obscurity, but that obscurity is an illusion.

By cross referencing just a few different foggy data sources, we can have crystal clear precision on things like individual location.
With a pretty cheap webcam and image classifier, one can easily sort through hours of coffee shop footage to find hundreds of login credentials.

I don’t say this to be scary (although it may feel scary, and that feeling is reasonable).
I think that we as people, as cultures, need to make informed, thoughtful decisions about how (or if) we want to adapt to this new post-obscurity era.

People need to understand the capabilities that exist, but perhaps even more so, we need to design our structures and our institutions around these realities.

Progress often depends on the imperfect enforcement of rules. What are we doing to ensure that we have room to change?