Good news: in 2015, device makers, OS providers and authentication solution providers all picked up their momentum on initiatives tackling user authentication challenges.
Cases in point: the support of fingerprint sensors in Google Android M, the proliferation of Apple Touch ID supporting solutions, Microsoft Windows 10 multi-method biometric support, Samsung’s fingerprint enabled devices, and the implementation and deployment of solutions based on the FIDO alliance authentication standards.
While all these trends help improve the user convenience aspect of the authentication process, the reality is that it will take many years before a good majority of devices (laptops, smartphones, tablets and phablets) can fully take advantage of such rich authentication features. Until then, we’ll be living in a heterogeneous world of devices.
We’ll need to deal with a wide-ranging collection of user-auth methods across a wider range of device types and diverse operating systems.
In addition, while some of the new assurance methods are definitely better than complex, lengthy passwords from a user convenience perspective, they don’t necessarily match or exceed the strength of those passwords.
For operations that require high levels of assurance, user-identity assurance solutions will need to mix and match methods of authentication to arrive at the desired level of assurance and strength.
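Conceptually, this mix-and-match logic could look like the following sketch. The method names, strength scores, and assurance tiers here are illustrative assumptions for the sake of the example, not values from RSA Via or any standard:

```python
# Illustrative assurance tiers and per-method strength scores
# (hypothetical values, chosen only to demonstrate the idea).
REQUIRED_ASSURANCE = {"low": 1, "medium": 2, "high": 3}

METHOD_STRENGTH = {
    "fingerprint": 1,
    "eye_print": 2,
    "password": 2,
    "fido_u2f_token": 3,
}

def methods_for(required_level, available_methods):
    """Pick a combination of available methods whose combined
    strength meets the required assurance level, or None."""
    target = REQUIRED_ASSURANCE[required_level]
    chosen, total = [], 0
    # Greedily add the strongest available methods first.
    for method in sorted(available_methods,
                         key=METHOD_STRENGTH.get, reverse=True):
        if total >= target:
            break
        chosen.append(method)
        total += METHOD_STRENGTH[method]
    return chosen if total >= target else None
```

A real solution would of course weigh methods using certified strength ratings and risk signals rather than fixed scores, but the principle is the same: stack methods until the target level is reached.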
In addition, user identity assurance solutions need to take the ‘what-if’ scenarios into account:
- What if the user loses their primary authentication device?
- What if the user environment is not conducive to the primary method of authentication?
- What if the user environment is conducive to multiple methods and the user wants to be able to choose the preferred method(s)?
- And what if the credentials associated with the user’s primary method of authentication have been compromised?
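Handling these what-if cases amounts to falling back gracefully: if the primary method is unavailable in the current environment, or its credentials are flagged as compromised, move on to the next acceptable method. A minimal sketch of that fallback logic (the function and parameter names are hypothetical):

```python
def select_method(preferred_order, usable, compromised):
    """Return the first preferred method that is usable in the
    current environment and not known to be compromised."""
    for method in preferred_order:
        if method in usable and method not in compromised:
            return method
    # No acceptable method left: deny access or step up out-of-band.
    return None
```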
These are the reasons why relying on a single method of user verification, or depending exclusively on a rigid, static authentication policy that works only on a specific subset of devices, is just a plain bad idea.
Instead, assurance solutions should dynamically detect the user’s device capabilities and match them against the service provider’s (or enterprise’s) preferences and acceptable authenticators (in certain cases, determining which of those authenticators meet the provider’s certification or regulatory requirements). They should also proactively determine where the user is, and whether what the user is about to do constitutes ‘normal’ behavior.
Identity assurance solutions would ideally give users choices in terms of how they want to be authenticated and, ultimately, make sure the end result matches or exceeds the service provider’s expected level of assurance, which is usually tied to the sensitivity of the application or action in question.
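Putting the matching and user-choice steps together, the core of such a policy engine might be sketched as follows. The authenticator names and the `acceptable_options` helper are hypothetical, intended only to illustrate the intersection-then-preference idea described above:

```python
def acceptable_options(device_capabilities, provider_policy,
                       user_preference=None):
    """Return authenticators both supported by the device and
    accepted by the provider, with the user's preferred one first."""
    options = [m for m in device_capabilities if m in provider_policy]
    if user_preference:
        # Stable sort: preferred method floats to the front,
        # everything else keeps its original order.
        options.sort(key=lambda m: m != user_preference)
    return options
```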
Most importantly, solutions need to make all of “this” as seamless and as invisible to the user as humanly possible.
In fact, as part of RSA Via’s patented mobile authentication features, we recently made it easier to define dynamic user authentication schemes, taking into account user device capabilities, risk, and the sensitivity of the application or resources the user attempts to access. And we introduced new choices for authentication: mobile biometrics using eye-prints, and support for FIDO U2F hardware tokens.
These all boil down to offering smart choices for user identification, helping to greatly simplify the identity administrator’s policy management tasks, and making the end user’s authentication experience smoother, without compromising security.
In my next blog on the topic, I’ll describe how to achieve the right “levels of assurance”, in more detail.
Special thanks to my colleagues Salah Machani and Alison Raymond Walsh for helping on this blog.