I don't have security fatigue, but I sure have 'privacy fatigue'.
I should worry about Google knowing this and that about me. I should worry about the stupid retargeting, and the fact that if I do something online, it follows me around the web in banners and "YouTube recommendations". And that everything is saved and googlable, and everyone can know everything about me.
And I used to be worried, but now I've given up. The assault from the privacy-sucking companies is too strong - Facebook and Google have the best engineers and everyone loves their open source code - and it's just way too convenient.
Learning about browser/machine fingerprinting is what finally broke me. I block ads, fight script execution, and enforce SSL, but it was one bridge too far. I can't conceal every trace of what my browser is sending, no matter what I do, and so I just accept that I'll be tracked.
On mobile, I turn off location, but I use Gmail and Android and Google search and I can't pretend I actually have any privacy left. Realistically, I don't see a way out worth taking.
And here I am preaching the value of privacy, but that's not the same as being able to make it happen.
" I can't conceal every trace of what my browser is sending, no matter what I do, and so I just accept that I'll be tracked."
It's worse than that. I determined early on in that game that just concealing stuff could be an identifier or profile in itself. The apathy of the crowds, the little oligopolies that form in browsers/apps to make them more alike, and the creative ways outliers try to be different all combine to increase the odds that outliers get identified, given a certain amount of data.
The original way I figured it out was when I was working on deployment strategies for both high-assurance SSL and Tor appliances. Their HW/SW architecture should've made them immune to code injection, with some protection against leaks. Sounds great, right? Wrong. They'd be about the only nodes where the 0-days the NSA et al likely had wouldn't work. That would instantly identify them as using the strong stuff. A low-volume, niche offering means a population small enough for close inspection. The few users that mattered to the enemy would be so low in number as to be easily targeted with other, more personal attacks. Just being bulletproof was itself an identifier that led to them using something better than digital bullets.
Any solutions devolved from the strong security I could prove to a large degree to a cat and mouse game like others were playing but for different objectives. Seems a bit hopeless. Even software immune to hacks has to blend in if you're wanting privacy or dodging attention of nation states. So, the widely deployed stuff also needs that property at least for whatever components you're blending with. Didn't leave me optimistic...
I'm not sure the big companies actually track you based on browser/machine fingerprinting. It's certainly entirely possible that they could, but it's relatively expensive to develop and maintain the infrastructure that does so, all to track a bunch of paranoids who aren't lucrative subjects of targeted advertising.
Also, you're not fighting for your privacy of today - you're fighting for tomorrow's privacy. Facebook is scummy with your data, but Google (who knows a hell of a lot more about you) is pretty nice with your data. The problems start when they won't be. Google knows what you buy, what's on your mind, which apps you open during the day and when, which people you have contact with and how often and when; they have your e-mail, they have your location and your usual routes, where you work and live. They also have aggregate data about stuff like traffic flow, festivals, businesses, etc. etc.
Now imagine them selling this data to the highest bidders. Insurers will know whether you exercise, whether you eat healthy, whether you smoke, etc. Other companies will spring up that buy data to check if you were really sick for work or just slacking (and businesses then buy this service for a price). Advertisers would sell their kids', mother's, and grandmother's souls for access to this kind of data. You think impulse-buy optimization at supermarkets is bad? Oh boy...
So yeah, fight the good fight, even if it's a bit of a hassle and slightly more expensive. You can actually find a good balance between privacy and usability. I use Protonmail for my mail, Apple Maps for my mapping needs (Google Maps as backup, sparingly), DuckDuckGo for search, Lastpass for passwords, and iOS (Android is mined HARD by Google) + OS X as my OSes. Linux is even better, but you're constantly tweaking or fixing small things.
> Now imagine them selling this data to the highest bidders.
A bit of a moot point, because Google not selling it is their business model. What Google might start selling pretty soon are predictions about people (and in some way they already do it) - but never the underlying data.
If Google starts failing, then I'd become really worried though. If they're desperate enough they might just sell the data off (at their own loss of course, but a business fighting for survival will go to any lengths possible...)
The best way to "fight" would be to make alternatives to Google and Facebook. You're not gonna sway the masses any other way.
Typing something into your browser's address bar (which defaults to Google's search on every platform) remains the most convenient way to find something online. For the vast majority of people, Google effectively IS the internet.
Cold comfort, but better than most. I'd never looked into in-the-wild use of fingerprinting, but it does seem likely that people who've blocked cookies are terrible advertising targets to begin with.
A related Google story: in Canada, I used to have a rooted Nexus device as my daily driver, until I chose to switch over to Apple around 2 (maybe 3) years back. Before moving to Australia, I wiped it completely; yesterday I fired it up and set up a fake email account on Gmail to access the Play store. It doesn't have a SIM card, but the fake email account asked for phone number verification - and had my Canadian number already pre-filled. I was totally creeped out. I have no explanation, except maybe connecting to my Canadian Wi-Fi network after wiping it (but that Wi-Fi was used by 5 other people too). I have never used it to call anyone after wiping it, and haven't had a SIM card in it since. It runs Cyanogen too. You cannot deceive Google; it is futile. I will just stop at running uBlock Origin and Ghostery in my browser, until they deceive me too. Nowadays you cannot even create an email account on Gmail or Outlook without giving away your phone number. The scales are not tilted in our favor.
I recently downloaded the UberEats app on my iPhone. When I opened it for the first time, it pre-filled the email address I use for Uber and asked if I wanted to proceed with that. I don't know where they got this info from, but it sure creeped me out.
I would hope Apple would make an app ask permission before grabbing my email address from my contact card, and I have no idea how it could get this information from the Uber app.
I believe apps published by the same developer have the ability to share information transparently.
Google is doing this with their apps - sign into the gmail app on your iphone, and chrome, maps and youtube will ask you repeatedly to sign in using the same creds.
..and looks like it's not limited to same dev. Niantic published apps are also able to pull out my Google creds. Might be because I have a google account defined in iOS, might be because they're a former-Goog company and Goog have given them backdoor details.
Any devs on iOS able to shed any real light on my vague speculation?
Since iOS 8 apps can share data on device through so called App Groups if they are published with the same bundle identifier prefix. For that to happen I believe they have to be published by the same developer account.
When you purchase a Nexus device, its IMEI, effectively a serial number, can be linked to a Google account so that the first-boot experience can prefill stuff - I think this is optional, but it's been a while.
> Nowadays, you cannot even create an email account on gmail or outlook without giving away your phone number.
Do you plan on staying in one location, using the same IP address, for the rest of your life?
If not, then gmail will occasionally force you to use your phone number to "verify" that it's you. Of course, in fact what they really want to confirm is the association between your email and your phone.
This just happened to me a few days ago. I flew out of town. I logged in from a different city, Google forced me to verify by sending an SMS to my phone.
Google doesn't block access if you haven't activated 2FA, but that page asking to link my phone number has been annoying me since its first appearance.
Dear devs, sales managers, etc.: if your user made a decision, respect it, and don't ask whether they've changed their mind every single time they use your product. You have other tools for that than yet another page on which I always click "Nope".
An easy way to know if they know about your other accounts is whether they ask if it was you who logged into your account when you log in from another IP/device. Say it wasn't you.
No, it is ok. I think we all feel this way. Most HN readers are familiar with many of the 'leaks' and various 1984-esque things happening in our world. I'd say a lot of us are uncomfortable around the Amazon Alexa devices, let alone the actual govtech face readers. I'm not saying that there is anything to be/not to be 'done' about it. I'm just saying that you are not alone in this feeling.
I've just come to accept that my ISP and their backbone providers keep a list of every server I've ever connected to. That Google keeps a list of everything I've searched, every video I've watched, every link I've clicked, and every IP address I've logged in from. That my phone maker, my service provider, and Google keep a list of every GPS coordinate I've ever visited. That Facebook keeps track of every picture I've looked at. ...
And I accept that a high enough ranking employee at these companies can view this information at will. And that the US government has a copy of all this data too.
>> I accept that a high enough ranking employee at these companies can view this information at will
At a huge number of companies, every junior dev on their first day of work gets full access to production database servers. The reality is that once you make any piece of information available to any web server, assume it can be found in anyone's hands within an hour.
That's the reality. The secret is to not care, and live life not worrying about it.
Is this true for any of the big tech companies? I have friends at Google and IBM, and they've both told me that getting access to production databases is extremely challenging.
I wonder what it's like at Facebook, Twitter, Snapchat, etc.
I have no evidence, but I imagine it's a solid "no" at any of the largest well-known companies. No way in hell is live production data available to every developer. But it's certainly typical (rather, completely standard operating procedure) at most small and also mid-sized companies. It's even worse than "everyone has access to the production database" - it's "the production database is copied to staging and individual developers' VMs".
Smaller companies never invest the time to set up proper staging and developer environments that operate on purely fictional data. It always starts as a copy of production, and the majority of companies don't even take the most basic step of swapping out sensitive info. The number of times I've seen users' plaintext account passwords (another problem entirely) synced to every developer's machine is honestly astounding.
Realized I wrote an essay; TL;DR version: even with the best intentions, the real world is very messy, and I am more paranoid than most, and thus would still advise limiting data exposure to bigCos.
Having worked at multiple of the largest, I would say your statement is 'broadly true' (especially in terms of intent; there is certainly a mission to protect that data), but there are enough edge cases that one can logically worry. Imagine a scenario where some legacy property sits in the prod vnets and needs prod access, but doesn't have all the oversight mechanisms new services do, and is now handed to a very junior engineer to maintain, with all the power that entails. I'm staying very far away from making any statements about the actual enterprises' goals/merits in data collection so as not to distract from my key point, but regardless of that, there are enough of these "edge cases" that I as a consumer would reasonably want to limit, as much as possible, the footprint of data I allow these companies. The "vulnerable surface" of data across all of these large companies is just too wide to protect 100%, especially given that you HAVE to trust some people as "good actors"; while this tradeoff is fine for many people, I fall on the line of not being a fan.
I may be being paranoid about this, but I want to disclose that I'm an MSFTie, and that none of these statements are specific to, or represent concrete information about, any company for which I am bound by an NDA on internal operations. They are just my learnings/intuitions as a paranoid dev/ops who has seen a wide range of operating environments and the various pitfalls within; I would have made an equivalent statement earlier in my career, prior to my bigCo phase, extrapolating from small/midCo patterns and trends.
Don't accept it. Use VPN in a VM with firefox. That way each VM/service has its own ip address and own env. It takes a bit of setting up but you get used to it.
Otherwise, those who do use Tor or other anonymity techniques will be targeted.
> I don't have a security fatigue, but I sure have 'privacy fatigue'.
Exactly. It would be good if there were some relevant research about how the perception of privacy issues influences users' behaviour, in relation to businesses (e.g. Google, Facebook) and state actors (governments).
The linked article explains that security fatigue has a cost on the economy; maybe if there was a similar conclusion about "privacy fatigue", that could lead to a healthy debate and, ultimately, (I know, I'm dreaming here) better privacy laws.
Some people have not given up. Home broadband now exceeds yesteryear's expensive symmetric T1 lines. Cheap virtualization-capable small server hardware (Dell T20, Lenovo TS140) + free hypervisors allow self-hosting of OSS services, which can replace/supplement commercial vendors and perform network filtering and DPI/IDS to reduce tracking.
A pfSense VPN VM can make self-hosted services and filtering available to mobile devices and laptops. After turning off iCloud and other settings, iOS is reasonably respectful of user policy and informed consent. A one-time admin cost per year (iOS major version lifetime).
True, it is awesome to build your own home-server, as long as you are a bit tech-savvy (and have a good broadband connection).
One good place to find self-hosted equivalents of popular online services is https://selfhosted.libhunt.com/
I think one of the worst things is sites that think they are being "more secure" by adding extra password rules beyond the typical. If people want to use the same password for everything - or maybe better, one password for the really important stuff and another for everything else - you really shouldn't try to fight against that by requiring at least one capital letter, at least one number, and at least one symbol (or whatever).
Another obnoxious thing is sites that, when you change your password, don't let you use one you've used in the last X months.
In both these cases, what happens is that you defeat people's attempts to make their passwords adhere to some system they can remember. And then they just say "f*ck it" and use really easy-to-guess passwords.
First National Bank (FNB) in South Africa has adopted every one of these absurd rules, and more:
Passwords must contain a mixture of upper and lower case letters, as well as one number and one special character
Length between 7 and 33 characters
Not the same as the previous 12 passwords
The same character cannot be used consecutively
Avoid sequential letters and numbers (123 or abc). I think this used to be enforced, not sure if it still is.
May not be the same as the name, userid or clientid. As I recall, this is enforced very aggressively, with innocent passwords being rejected because they happen to have some matching substring.
I've been using the same password for 3 years because changing passwords is so brutal. Complaining is futile because of the bullshit cargo culting around security.
I've never understood why sites limit password length. You're (hopefully) hashing it anyways; the length of what the user enters has no bearing on what you're storing in your database.
Exactly. Limits only become reasonable when they're meant to stop you from uploading hundreds of megs of data and monopolising resources. A 1 KB or even 1 MB password supplied by a tiny fraction of a percent of users is not going to make an impact on you, but it can provide that small group of users with massively increased peace of mind.
"I've never understood why sites limit password length. You're (hopefully) hashing it anyways; the length of what the user enters has no bearing on what you're storing in your database."
What if they upload a GB or TB binary as password? I've always wondered but nobody told me if there's some inherent cut-off that would prevent such a DoS attack.
Exactly. Such anti-abuse limits would start at, say, 256 characters. The number of banks and services that limit their passwords to 20 characters or fewer is startling.
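To make the point concrete, here's a minimal sketch (illustrative constants, not any site's actual policy) showing that a generous anti-abuse cap and a fixed-size stored hash coexist just fine - the hash is the same length whether the password is 8 characters or 800:

```python
import hashlib
import os

MAX_PASSWORD_BYTES = 4096  # anti-abuse cap only; no human passphrase gets near it

def hash_password(password, salt=None):
    """Hash a password with PBKDF2; output size is fixed regardless of input."""
    raw = password.encode()
    if len(raw) > MAX_PASSWORD_BYTES:
        # Only reject absurd sizes that could be a DoS vector.
        raise ValueError("password too long")
    salt = salt or os.urandom(16)
    # PBKDF2-HMAC-SHA256 always yields 32 bytes, so storage cost is constant.
    digest = hashlib.pbkdf2_hmac("sha256", raw, salt, 100_000)
    return salt, digest
```

So a 20-character cap buys the site nothing on the storage side; only the request-size guard matters.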
I've seen even worse than that - for low-importance passwords I normally record them in a mail-to-self memo then paste them into the signup form. I recently encountered one site that silently truncated the pasted password.
While silent truncation is of course bad because it limits the potential entropy of a password, if implemented consistently it's technically not any worse than a limit that's presented to the user. But just wait until the new hire doesn't know about it and implements a form without it, and then all of a sudden users are entering the same "long" passwords they always have and scratching their heads when it fails.
If the password is too short, an attacker can brute force the hash by simply trying all 6 character combinations of allowed password characters, provided that the hashes get leaked or hacked, which happens quite a lot these days. (Or if the website is stupid enough to allow an attacker to brute force try all these combinations without throttling you or stopping you at the xth wrong entry)
Doesn't that drastically lower the number of attempts required to exhaustive-search a password from a hash? Especially if people are using passwords as short as 7 characters. Even if not, that's one of the most moronic password rules I have heard.
Well, yes. But consider the opposite case: the company doesn't prohibit these kinds of passwords, and the attacker does an exhaustive search of ONLY passwords that have sequential characters. That attack may be better even though they are only searching through a fraction of the total password space, since users will very often choose passwords that have sequential letters.
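The arithmetic behind this is easy to check: each extra character multiplies an exhaustive search by the alphabet size, which is why short length limits hurt far more than composition rules help. A rough back-of-envelope, assuming a 62-symbol letters-plus-digits alphabet:

```python
import math

def search_space(alphabet_size, length):
    # Total candidates an exhaustive search must cover.
    return alphabet_size ** length

six = search_space(62, 6)    # letters + digits, 6 characters
seven = search_space(62, 7)  # one character longer

# One extra character multiplies the attacker's work 62-fold.
assert seven == six * 62
print(f"6 chars: ~{math.log2(six):.1f} bits; 7 chars: ~{math.log2(seven):.1f} bits")
```

At roughly 36 bits for 6 characters, a leaked fast hash falls to commodity GPUs in short order; banning "abc"/"123" patterns trims the space only marginally by comparison.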
Then they send you emails (from some random non-official looking domain) with password protected PDFs and ask you use bits of your account details to open it. The opportunity for phishing is insane.
Exactly. I want an extra button next to the login form for "login by email". Send me an email that contains the geolocation and IP address of the requester, a login link (one that also logs in the requesting browser session, so I can auth from my phone even if I'm browsing on a PC), and a "reject" link the service can use to determine that the request was malicious.
Luckily we've started to see more sane advice on passwords come out from reputable organisations which contradicts some of the security cargo cults that you mention.
Diceware-style passwords can be more secure than random characters, but be rejected by shortsighted character requirements. See https://www.xkcd.com/936/
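For reference, a quick sketch of why the diceware approach wins. The tiny wordlist here is just a stand-in; a real diceware list has 7776 words (~12.9 bits each):

```python
import math
import secrets

# Illustrative sample only - real lists are 7776 words long.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "lantern", "pebble", "quartz"]

def passphrase(n_words=5):
    """Pick words uniformly at random with a CSPRNG and join them."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def entropy_bits(wordlist_size, n_words):
    # Each uniformly chosen word contributes log2(list size) bits.
    return n_words * math.log2(wordlist_size)

# Five words from a full diceware list beat 8 random printable characters:
assert entropy_bits(7776, 5) > entropy_bits(95, 8)
```

Yet a "must contain a digit and a symbol" checker rejects `correct horse battery staple orbit` outright, despite its ~64 bits of entropy.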
We have lists of known passwords. Virtually anything up to 8 characters. Many things above that.
There are lists of millions of real-life passwords dumped from services. All of those should be used to screen passwords on entry (directly or by hash) and force a password update on match. It's reached the point that this should be mandated by law.
I know far too many nontechnical people still using utterly broken passwords (and who've had multiple accounts hacked, and insist that they're doing nothing wrong....)
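Screening against breach dumps doesn't even require hosting the dumps yourself: the Have I Been Pwned range API uses k-anonymity, so the server only ever sees the first five hex characters of the candidate password's SHA-1. A sketch of the client-side logic (the network call is injected as a function here so the splitting/matching can be shown offline):

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's SHA-1 into the 5-char prefix sent to the API
    and the suffix matched locally - the server never sees the full hash."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password, fetch_suffixes):
    # In a real deployment, `fetch_suffixes(prefix)` would GET
    # https://api.pwnedpasswords.com/range/<prefix> and parse the suffixes;
    # it's a parameter here so the logic is testable without the network.
    prefix, suffix = hibp_range_parts(password)
    return suffix in fetch_suffixes(prefix)
```

On a match, force a password change - exactly the "screen on entry" policy suggested above.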
Or what about the sites that allow special characters in the sign-up form but when you try to log in on the mobile it doesn't allow those characters any more...
I wish when designing a new site people would take into account how frequently a user is likely to log in. If it's going to be something infrequent, it would be better to use OAuth/OpenID or an email/SMS based login. Forcing a user to come up with a password they know they won't be typing often all but ensures they'll use the same password on multiple sites. For example, my side project varmail.me is a site you'd log into maybe once every couple of months, even as an active user. So I made it work by emailing you a login link. I think people defaulting to a user+password auth scheme is part of the problem.
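The email-link approach boils down to issuing a signed, expiring token, so no password ever exists. A minimal HMAC-based sketch - the secret, TTL, and payload format are made up for illustration, and a real deployment would use a vetted library plus single-use tokens:

```python
import base64
import hashlib
import hmac
import time

SERVER_SECRET = b"replace-with-a-random-server-secret"  # assumption: from config

def make_login_token(email, now=None, ttl=900):
    """Return '<base64 payload>.<hex HMAC tag>' valid for `ttl` seconds."""
    expires = (now if now is not None else int(time.time())) + ttl
    payload = f"{email}|{expires}".encode()
    tag = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify_login_token(token, now=None):
    """Return the email if the token is authentic and unexpired, else None."""
    try:
        b64, tag = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except Exception:
        return None
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # tampered or forged
    email, expires = payload.decode().rsplit("|", 1)
    if (now if now is not None else int(time.time())) > int(expires):
        return None  # link expired
    return email
```

The link in the email just carries the token as a query parameter; clicking it within the TTL establishes the session.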
If I recall correctly, Medium uses this login method.
There are a lot of smartphone apps that almost never log you out and would benefit from this. I recently had some trouble trying to remember how to log into the Taco Bell app after a session that lasted ~2 years. And I've lost count of how many friends have been unable to remember their Snapchat password and just created a new account instead when they got a new phone.
Frankly, I trust HN's password security more than my bank's. Two simple text fields, basically no rules, but when the next Heartbleed hits I have real faith that HN will patch quickly, and keep everything well hashed and salted.
My bank, with its eight different password rules, security image, and reset questions, will (and has) roll over for every breach that comes along.
I’m fairly convinced that there are contractors making money peddling the same stupid ideas to IT staff everywhere, leading to common solutions no matter the merits. It seems every site I use has now sent me that dreaded E-mail, boasting how everything has been “improved” with new “security” measures; meanwhile, every last one of their changes has been not just ineffective but usually provably detrimental to security.
You forgot to mention the kicker: after hurling 14 different password requirements at you, they let you reset your password only by answering “security” questions which are usually fixed and not even applicable. Seriously, some of the questions are so ridiculous, I have seen fixed lists where NONE of the questions applies to me. Which is OK because it’s insecure to answer truthfully anyway; I end up creating keychain items just to store the answers to my “security” questions.
"Security Fatigue", or just plain old drowning? I'm a software engineer and I feel like it's impossible to completely secure my hardware. It's a full time professional job to secure a computer, and at some point you just give up and do the best you can, knowing there are probably several holes in your security you aren't even aware of.
That may be safe, but seems impractical. I mean, I use email. It would be very, very bad if someone got into my email and started vandalizing my life. I might never know why I was losing friends, or worse.
Secure vs. not secure is not a black and white thing.
This isn't some situation that is unique to me. Most anyone could have their life messed up badly if someone got on their email and wanted to do them harm.
In other words "not end-to-end, but encrypted on all hops for most messages". Is there any evidence of email being intercepted by anyone who would wish to do anyone harm that isn't a public adversary of western intelligence?
> Countless computers secured by full time professionals have been hacked.
And yet, the black hats who breach such systems often have an attitude like "Wow, such a reputable organization has such sorry security. They deserve what I did to them".
Computer security is a black hole that will consume ever increasing amounts of money, memory, and cpu cycles, forever. What a waste.
No way. Computer security is like a tax whose returns diminish rapidly.
Once you start shifting money from product/system engineering toward security, security starts turning into a money monster that delivers nothing. I've seen it happen time and time again.
The asymmetric nature of raising the cost for an attacker is a red herring. You can pat yourself on the back that you've supposedly made it 100x more expensive to attack you, but one operational fuckup pops that fantasy bubble at any time.
I have to agree here. Security in itself is highly asymmetrical, because any single flaw can prove fatal, even with the latest and greatest defense-in-depth techniques.
Not to forget the many, many instances where security systems actively harmed / enabled attacks.
Security is a black hole because it gets ignored at the design stage, it gets ignored during development, and then suddenly when in production, people try to secure their systems and, surprise, it doesn't work.
Security isn't some orthogonal concern that can be developed or managed independently.
It doesn't have to be that way. Certain approaches like "hire a bunch of consultants" or "buy more security products" or "hire smarter people and tell them to be really careful" aren't going to fix the fundamental problems. I see two main issues, one technical and one economic.
The technical problem is that we should be ruthlessly eradicating undefined behavior at all levels of our hardware and software stacks, and to the extent possible constructing applications out of building blocks that are very difficult to misuse. Among other things, this means not writing software in C or C++, which is a hard sell to a lot of people (especially if they're writing operating systems).
The economic problem is that it's nearly impossible for a customer to know whether a product is secure or not. If secure products are more expensive to produce than insecure products, and customers are not willing to pay more for secure products, the result is that insecure products will be more successful. (See George Akerlof, "The Market for Lemons".)
I don't find thinking of "secure" in an absolute sense really helpful.
It all depends on your threat model, so who's going to attack you and what vectors and resources they'll have at their disposal.
Sure against nation state level attackers it's fair to say that no-one but the most well funded of groups will be able to entirely avoid compromise, but it's definitely possible to avoid most of the lower end attacks that are more realistically a likelihood for most Internet users.
Well at this point we're battling several different entities:
Domestic Government Agencies, Foreign Government Agencies, various "Secretive Hacking Groups", Hacktivists, Malware, The Prince of Nigeria and other assorted phishing scams, our own Hardware Manufacturers (I'm looking at you, Intel and AMD), people within the network who are careless about their own security and have access to your data, not to mention the usual hits from China and the Middle East.
Irony: Every government wants to authorize hacking at the state level, but nobody is rushing to secure their own infrastructure from being hacked.
that's one of the reasons a lot of security people these days will recommend not just relying on preventative controls, but also detection and response.
The assumption that nothing is 100% secure shouldn't lead to people just giving up, but hopefully lead to them spending effort on detecting attempted and successful intrusions, and on having effective responses to them.
> that's one of the reasons a lot of security people these days will recommend not just relying on preventative controls, but also detection and response.
Unfortunately, the parties that need this most are the least likely to implement it. The usual response to the question of whether intrusion detection or exfiltration detection/prevention measures are installed is 'What?', which I find a much scarier answer than 'No.'
oh absolutely, there's a bad problem in security where people don't want to admit there might be a chance their controls aren't perfect, and by deploying better detect/react controls you are very much admitting that point.
So there's a tendency to downplay those requirements (what OS or application vendor is going to say "hey you should deploy something for when our security fails on you")
Yeah, I definitely reuse passwords for things, and not always strong ones. I do have a certain nihilistic resignation about the whole thing: sure, I can turn on two factor auth with a different automatically generated GUID password I keep in a password manager, but anyone can open up a line of credit anywhere in the country under my name if they know one 9 digit number that isn't really secret and can never be changed.
For me the point lies in avoiding the hassle of getting a notification from haveibeenpwned.com that one of my accounts has been compromised and having to worry where else I used that username/password combo.
That's why I use a password safe, so that when that happens (at least 4 times to date), I can just shrug my shoulders and move on, resetting just that one password if I still use the site.
It's unfortunate that "good UX" isn't really considered across all fields which have users. The recommendations to mitigate security fatigue are no different than any sort of user frustration:
1. Limit the number of ~~security~~ decisions users need to make;
2. Make it simple for users to choose the right ~~security~~ action; and
3. Design for consistent decision making whenever possible.
#2 is part of the problem. Users typically aren't informed enough to know the right action. I think that's one of the reasons we are in the current mess we're in.
Attempting to explain the situation in plain, simple language is a better approach, in my opinion.
It happens to IT professionals, as well. "Shit, I just gotta get this done, let's do it like this and re-evaluate later". Look at the $10k bounties on hackerone.com -- many bugs are clearly from people who are not operating at their best.
I'd say it can. The modern world is forcing people to make more and more use of computer systems to operate effectively in society. Governments and corporations are moving processes online (generally 'cause it's cost-effective for them to do so).
It's very unfortunate that in a kind of tragedy of the commons, each site owner just rolls out the easiest authentication options available (username/password) and then leaves the user, who's rarely equipped to handle it, to deal with the fallout of having huge numbers of logins to manage.
Particularly raw for Dilbert: "Squeal like a pig" is from the 1972 movie "Deliverance" and refers to an assault that was one of the most disturbing US mainstream movie scenes of the 1970s.
The only real improvement in all that time that I can think of: password managers. I almost said Single Sign On, but that comes with its own security issues.
I think his point was that if your password is stored on your phone, two factor authentication doesn't actually add any security because it's no longer two factor.
2FA seems a modest improvement at best, especially when it boils down to a TOTP secret you can use anywhere. (I have a greasemonkey script that enters my required '2fa' token for me.) With a yubikey form factor it's much better... It's also relatively useless if you already have a strong password and don't re-use it, i.e. a password manager. Sure it may stop someone from logging in as you if they just have your (unique) password, but if you consider the ways they can just have your (unique) password that doesn't really matter.
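For what it's worth, a TOTP secret being usable "anywhere" follows directly from RFC 6238: the code is just an HMAC over the current 30-second counter, which is exactly why a greasemonkey script can mint it. A sketch:

```python
import hashlib
import hmac
import struct

def totp(secret, unix_time, digits=6, step=30):
    """HOTP (RFC 4226) applied to the time-step counter (RFC 6238)."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Anyone holding `secret` can generate valid codes forever, so storing it next to the password (in a script, or in the same password manager) collapses the two factors back into one - which is the point being made about yubikey-style hardware being the better form factor.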
I'm not really surprised to see this at all. The problem of non-technical users being asked to operate systems in what is a very hostile environment (The Internet) has been evident for a while.
My prediction is that this will lead to even more of a rise of walled garden style ecosystems, where this problem is at least partially managed for the user by the owner of the ecosystem.
So for example if I use iOS apps for everything I can let them handle authentication for me and use my fingerprint, which is a much much nicer user experience than remembering a load of passwords.
Of course that's not great for the open web, but this very much feels like a tragedy of the commons to me: everyone knows better security is needed, but no one wants to be the one leading the charge, as it's a really hard problem to solve.
> So for example if I use iOS apps for everything I can let them handle authentication for me and use my fingerprint, which is a much much nicer user experience than remembering a load of passwords.
I'm two-thirds of the way thru the comments here, and you are the first to mention this. And yet, as you say, it's a "much much" nicer experience. I've started allowing iOS apps to identify me by fingerprint, and it's a lot more pleasant to do that than to type in some crazy long password.
But the problem with using your fingerprint as your "password" in this case is that if your fingerprint becomes compromised you are royally fucked: You can't ever change it!
Fingerprints should be usernames, not passwords.
One of the worst ones is those malicious "your computer has been infected" ads, which web browsers allow to disable the close-tab button with message-box popups and the like. Users get frustrated, give up, and call the phone number, pay the $250, etc.
It's very hard for me to convince people that:
A. More than likely, anything you do before you call me is going to make it worse.
This is one of the many reasons that script blockers (which are another opportunity for security fatigue) are something I frequently recommend friends, relatives, and co-workers use.
I believe this is a poor solution, because the problem is simply with web browser design. Website-initiated popups should remain inside the browser tab, and the browser tab should always be closable. There is NO excuse for a web browser to allow a website to, even temporarily, disable its UI, particularly the UI to close it. Scripting that controls your browser should not be allowed. Nontechnical users need to be able to trust their browser UI. Message boxes that look like OS prompts should simply not be available options for websites to use.
Then you haven't seen aggressive enough scripts. It's certainly possible using perfectly standard usage of web standards to cause a denial of service attack to most browsers in that way.
Have you ever gotten a popup message box in a web browser before? Pretty much all web browsers support them, and you can't click on any browser UI until you address the prompt.
...and then the web page can pop up another alert before you have time to do anything else.
for (;;) alert("spam!");
Chrome will, at least, give you an option to disable the alerts if it thinks you're getting them too often, but by that point you're well into scary territory.
Footnote: just tried it in Chrome, and after disabling popups I got a tab which was spinning using 100% CPU and which was completely uninteractive. I couldn't even close it using the x on the tab and I had to kill it via the Chrome task manager. Hmmm...
There was a time when using vimperator with firefox you'd always be able to focus the command prompt and enter a command like :q to close the tab, even if there's an active alert popup on the screen.
It's hilarious to me that this thing is still a problem when it's technically trivial to solve.
Actually I don't know if the feature was due to vimperator, firefox, or just my window manager.
I have no idea what you're talking about. I tried it and after the first popup there's a checkbox to disable any more popups. It didn't affect any of my other tabs and I could kill the process of that one tab through task manager.
Edge does it as well, but I advise people not to click buttons on malicious webpages. So the fact that you have to click the button on the malicious webpage several times to get the option to disable it is very irritating.
This seems to be mostly about "having to remember too many passwords":
> “Years ago, you had one password to keep up with at work,” she said. “Now people are being asked to remember 25 or 30. We haven’t really thought about cybersecurity expanding and what it has done to people.”
So why not switch to using password managers and hardware tokens then?
Password managers are just another tradeoff of insecure vs inconvenient.
All credentials under one master password? Single point of failure. You could use more passwords, but we are heading back to square one then.
Next you have to decide where to make your passwords accessible for yourself. Do you also want them on the phone? Because if you don't, it can get kinda inconvenient. On the other hand, I'm quite positive my phone has more exploitable security issues than my laptop. Same for all other devices you might use. What about devices that are shared by other people? You either lock yourself out of the things for which you've opted to use the password manager, or you expose it to the security issues on said devices.
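On the single-point-of-failure worry: password managers don't use the master password as the vault key directly; they stretch it with a deliberately slow KDF, so every offline guess at the master password is expensive. A sketch, assuming PBKDF2-HMAC-SHA256 (the iteration count is illustrative; real managers tune it, or use scrypt/Argon2):

```python
import hashlib
import os

def derive_vault_key(master_password, salt=None, iterations=600_000):
    # PBKDF2-HMAC-SHA256: deliberately slow, so each guess at the master
    # password costs an attacker `iterations` HMAC computations.
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                              salt, iterations)
    return salt, key  # the salt is stored with the vault; the key stays in memory
```

It doesn't remove the single point of failure, but it does mean the master password is a far harder target than any individual site password would be.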
I'd rather just have less stupid passwords to begin with. Why do most stores require an account for me to place an order? Not for my security. Forums, boards, other places.. anonymous posting without accounts works just fine, and there are ways to create a persistent identity for those who want it, without requiring it from everybody. Yet most "social" sites require accounts. Mostly not for my security.
> I'd rather just have less stupid passwords to begin with. Why do most stores require an account for me to place an order? Not for my security. [...] most "social" sites require accounts. Mostly not for my security.
True, that. It seems many sites insist on user accounts more for their benefit than yours - they want your email address so they can nag you towards their "funnels" for further profit opportunities, along with whatever personal and/or demographic information they can scrounge for their analytics, etc. They require you to make an account, not for your security, but merely as a premise to part you from your juicy monetisable data.
Create different personas for each site. This includes different usernames/emails. A password manager helps to manage this type of compartmentalization.
Your post is very ridiculous. There are many good password managers such as KeePass, 1Password, and LastPass. The latter two make syncing between your devices very convenient; however, there are alternate ways of syncing KeePass databases. Secondly, today's mobile operating systems are far more secure than desktop computers: iOS and Android 6+ ship with solid security features such as application sandboxing as standard.
Most people I've talked to who aren't "into computers" think they're too complicated, and don't really understand how they work. Plus they don't feel they know enough to assess whether or not they're a good idea.
You would think it should be easy, but I've found that people have worked themselves into such a frenzy over how little they understand computers that they won't even try to understand. :/
Switching to a password manager has increased my security more than any other decision I've made, not to mention the convenience of having a secure place to store them.
I recommend them to everyone I can but still people are afraid.
For the love of all that's holy, my local pizza shop does NOT need a secure password. They don't even store my credit card. I honestly do not care if someone logs in and sees my favorite order.
The phrase "security fatigue" makes me raise an eyebrow. Are these guys implying I should keep track of twenty or thirty passwords, but I just can't keep up?
Frankly, if it's not something I use everyday and care about, I can't be bothered to put a strong password in it.
Have end users felt major repercussions from any of the large hacks that have happened in the last five years? I feel like it actually induces positive feedback, at least from some consumer companies. For example Sony got hacked and users got (2?) free games. The government got hacked and users got free credit monitoring. I understand such hacks fuel credit card fraud and identity theft, but at least in my small non-tech circle this has been a nonfactor.
I told my friend they should use a secure setup like this, and I was laughed at. It might sound like satire, but those are baseline procedures for working with the web now. As soon as you connect a computer to the Public Internet it instantly becomes a target, and needs to be hardened as such.
This is why Touch ID & iCloud Keychain are such important advancements. It's not enough to make it possible to securely manage passwords. You also have to make it easy.
Is it possible that a startup could come into this space and solve some of this problem?
Something between all your passwords are belong to us walled garden touch id scheme and tin foil hat must memorize new 20 char randomized password every 10 months setup....
It seems that answers to this problem fall into one extreme or the other, but I would personally use a solution somewhere between the two that gave me peace of mind and was convenient at the same time.
This would probably be a password manager type thing / cloud solution? Maybe open source?
Some things I'd like to see:
- secure passwords where appropriate: do I need my pinterest account to be super secure?
- 2 factor auth where appropriate: protect my bank accounts, etc.
- tell me when there's been a breach and prompt me to change my password - who can keep track of all the times I need to change my password?
- let me have a rememberable password sometimes - sometimes I need to log into something not on my personal phone / computer etc.
- don't let the nsa spy on me/ my cloud account / make it harder than normal
- maybe integrate with keyfobs / security hardware where appropriate
that's some stuff off the top of my head but there are so many little catches in dealing with passwords that I would be happy to pay for a product that helped me manage it in the right way.
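On the breach-notification item above: one building block that already exists is Have I Been Pwned's Pwned Passwords range API, which uses k-anonymity so the service never sees your password or even its full hash. A sketch of the client side in Python (the endpoint named in the comment is the public HIBP one; networking and error handling are omitted):

```python
import hashlib

def hibp_range_parts(password):
    # k-anonymity: only the first 5 hex chars of the SHA-1 hash leave the
    # machine; the server returns every known suffix sharing that prefix,
    # and the match is done locally.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# A client would then GET https://api.pwnedpasswords.com/range/<prefix>
# and search the response lines for "<suffix>:<breach_count>".
```

Several password managers already wire this into their breach-audit features, which is roughly the "tell me and prompt me" experience the wishlist describes.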
I wonder if there are others out there that fall into this same middle ground of secure, private, and good enough?
1password almost completely solved this problem for me. Something like it should become part of the OS. Although I wish the agilebits people all the best...
The silly password rules aren't great, but on the whole I think of the issue more as "account fatigue" (which is sort of mentioned in the opening paragraph and then largely ignored). At work alone, I have:
1) A Windows domain account
2) A GitHub account
3/4) Accounts for two separate project management web apps
5) An account for our own web app
6) An account for the payroll web app
7) An account for the HR performance appraisal web app
8) An account to register for on-site flu shots
9) An account on a project development VM
10) An account for the outsourced IT security training
And probably a few more that I forgot because I'm not in front of my password manager right now.
It also doesn't help that we have a narrative around "identity theft" that puts virtually all of the burden of a leak on the account holder, even in cases where it was unequivocally the company's security that failed.
LastPass + 2FA here. I remember one diceware-style master password, the device creating the 2FA tokens has never even entered it, much less the app for LP. 16-100 character randomised alphanumeric+special passwords for every account, no need to remember a single one. Their browser extension is really good, too.
Oh and the passphrases for my PGP and SSH keys. Also stored in my LP vault.
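A diceware-style master password like the one described above can be sketched in a few lines. The word list here is a tiny illustrative stand-in; a real Diceware list has 7,776 (6^5) words, so each word contributes ~12.9 bits of entropy rather than the 3 bits here:

```python
import math
import secrets

# Tiny illustrative word list; a real Diceware list has 7,776 (6^5) words.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "lentil", "quartz", "maple"]

def diceware(n_words=6, wordlist=WORDS):
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

def entropy_bits(n_words, wordlist=WORDS):
    # Each uniformly chosen word contributes log2(len(wordlist)) bits
    return n_words * math.log2(len(wordlist))
```

With the real 7,776-word list, six words give roughly 77.5 bits - memorable, yet far beyond what online guessing can touch.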
It's not security fatigue, it's just old-fashioned laziness combined with ignorance, compounded by the "on-a-computer" rationale that makes 'normal' people turn their brain off because they treat this box like it's black magic rather than trying to understand it.
That they 'have' to use this box for work or recreation, rather than having a curiosity that fuels learning and exploration and therefore better understanding, leads to them feeling like they're at the mercy of the machine, rather than the master of it.
I think this is a mischaracterization. I'm a software developer for a living and for fun, too, and I get tired of dealing with endless parades of security breaches and new best practices and so on.
Even motivated, curious people can get fatigued, bored or annoyed.
You make the mistake of thinking "on a computer" is the only place people are lazy or take shortcuts or risks for the sake of convenience. Just look at driving behavior when someone's in a rush and floors it when the light turns yellow.
Even with curiosity, the everyday stuff that's cumbersome still feels like tedious, unnecessary work. Things like LastPass, SSH private keys, or even 2FA are successful because they remove the tedium yet still add value in terms of security.
It's extremely ironic for an invective against laziness and ignorance to take the form of ignoring scientific research. If you really believed in curiosity and learning, you'd explain why you think the study is wrong.
I am mystified that people have so much trouble grasping basic computer concepts. But from the users' perspective, technologists have foisted a system on them that seems to be poorly made and requires a high level of expertise to use safely. Blaming the user when they are victimized by someone taking advantage of a shoddy system is, well, victim blaming.
It's analogous to people understanding how their cars work. Many people are capable drivers but have no idea at all about what goes on in the engine, transmission, etc. Would understanding those improve their driving? Potentially (especially in cases where they fail), but clearly it isn't a requirement. But like you say, the bar seems to be higher for computing.
I was tempted to use a car analogy, but I resisted. It seems apt to me. I "have" to drive a car for work and recreation. I know a little bit about it, but I am not an expert, and I don't want to be. I have too many demands on my time already. I'm happy to pay the manufacturer and the mechanics to be the experts.
But you still probably know how to drive, pump gas, and do basic things like change a tire. You have controls that you understand for things like the radio or cruise control, and you will check the manual if you want to set the clock.
> the "on-a-computer" rationale that makes 'normal' people turn their brain off because they treat this box like it's black magic rather than trying to understand it.
Even if that were the case, would the prescription be any different? Regardless of how the user fails to behave according to our designs, the designs which included their unrealistic behavior were wrong.
> I should worry about Google knowing this and that about me, I should worry about the stupid retargeting and the fact that if I do something online, it follows me through the web with banners and "youtube recommendations". And that everything is saved and googlable and everyone can know everything about me.

> And I used to be worried, but now, I gave up. The assault of the security sucking companies is too high - Facebook and Google has the best engineers and everyone loves their open source code - and it's just way too convenient.
Sorry for unrelated ranting.