Last year I spent some time on an iOS UIViewController subclass that can be used as a replacement for the common PIN or password/login screens we often see on mobile apps. It is inspired by the Windows 8 Picture Password feature.
Today, I’ve been talking to some fellow instructors at Xamarin University about designing good touch interfaces, and that reminded me of my project. So I thought I’d create a small blog post about it. It’s all free and available on GitHub. Read on if you are interested in the details.
When I first saw the picture password option on Windows, I thought it was a really nice alternative to the usual PINs and password screens. Surely, somebody must have implemented something like that for iOS already…but I did not find anything.
When I played with the Windows picture password I noticed that it only allows specific gestures (connections between two locations, taps, and circles). I chose a different approach and wanted to allow any kind of drawing. Microsoft argued that users would fail to recreate arbitrary gestures correctly. I think my solution is a good compromise: yes, you have to be precise – but that’s really the idea, right? We don’t want others to easily guess our picture password.
How it works
The code defines a grid rect size, which is used to rasterize the continuous path drawn by the user. By default this is set to 30×30 units. It serves to reduce the number of location points in the user input. The images below show the original gesture.
Notice that there are three parts of the complete gesture:
- The outlines of the left rock
- A tap on the middle rock
- A vertical line through the third rock
When the gesture is finished, each point of it is evaluated and a square is placed around it. The size of this square is 30×30 by default. If a point’s square does not intersect the square of any previously stored point, the center of the square is remembered (the “rasterization process”). This reduces the number of stored points drastically while creating a dynamic raster. Note that even though the squares are 30×30, the complete image is not divided into fixed 30×30 tiles.
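The rasterization step can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not the actual KSPictureLogin implementation; the function name and the square-overlap test are my own.

```python
GRID_SIZE = 30  # side length of the square placed around each point

def rasterize(points, grid_size=GRID_SIZE):
    """Reduce a continuous stroke to a sparse list of points.

    A point is kept only if the grid_size x grid_size square centered
    on it does not intersect the square of any point kept so far.
    """
    kept = []
    for (x, y) in points:
        # Two axis-aligned squares of side grid_size overlap iff their
        # centers are closer than grid_size on BOTH axes, so they are
        # disjoint iff the centers are at least grid_size apart on
        # either axis.
        if all(abs(x - kx) >= grid_size or abs(y - ky) >= grid_size
               for (kx, ky) in kept):
            kept.append((x, y))
    return kept

# A dense diagonal stroke of 50 points collapses to just 4 raster points:
print(rasterize([(i, i) for i in range(0, 100, 2)]))
# → [(0, 0), (30, 30), (60, 60), (90, 90)]
```

Because kept squares are anchored wherever the user actually drew, the raster adapts to the stroke instead of snapping it to a predefined grid.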
When the user has to verify the entered gesture, all of the points from the rasterization process are used as the centers of so-called verification squares. Verification is done by an implementation of IKSPictureLoginGestureVerifier. The verification squares are bigger than the 30×30 squares used to apply the raster – 45×45 by default. This allows the user to be a bit inaccurate when verifying their gesture. The image below visualizes the rasterized gesture paths and the verification squares.
To match a complete gesture the following checks are applied:
- Each part of the gesture must be matched with a specific percentage. In the example above this means the path around the left rock, the tap on the middle rock and the vertical line through the right rock. If parts one and two are matched 100% but part three isn’t matched at all, the complete gesture is not matched.
- For tap gestures, the tapped location must be inside the verification rect around the stored rastered location.
- Path gestures are treated like tap gestures in terms of “is a point a match or not”. In addition, however, a minimum percentage is checked: by default, the user has to match at least 70% of the original locations.
- To prevent “hackers” from simply trying to hit every possible point on the screen (which would give them a 100% hit ratio), a maximum overflow factor is defined. By default this is set to 140%. This means that if the number of verification points is 40% higher than the number of original locations, the gesture will not be verified, even if the required 70% match was reached.
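The path-matching rules above can be sketched as follows. Again, this is a hypothetical Python illustration of the described logic; the real checks live in the C# IKSPictureLoginGestureVerifier implementation, and the function names here are my own.

```python
VERIFY_SIZE = 45      # verification square side (larger than the 30x30 raster)
MIN_MATCH = 0.70      # at least 70% of stored locations must be hit
MAX_OVERFLOW = 1.40   # reject if the input has 40% more points than stored

def point_matches(p, stored, size=VERIFY_SIZE):
    """True if p lies inside the size x size square centered on stored."""
    return (abs(p[0] - stored[0]) <= size / 2 and
            abs(p[1] - stored[1]) <= size / 2)

def verify_path(stored_points, entered_points,
                min_match=MIN_MATCH, max_overflow=MAX_OVERFLOW):
    # Overflow guard: drawing over the whole screen produces far more
    # points than the stored gesture, so it fails before any matching.
    if len(entered_points) > max_overflow * len(stored_points):
        return False
    # Count how many stored raster points were hit by at least one
    # entered point, then require the minimum match percentage.
    hits = sum(1 for s in stored_points
               if any(point_matches(p, s) for p in entered_points))
    return hits / len(stored_points) >= min_match
```

A tap gesture is effectively the degenerate case: a single stored point whose verification square the entered tap must fall into.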
It’s a bit over-engineered 🙂 But I used it as a training exercise.
- There is a factory interface (IKSPictureLoginFactory) which is implemented by a default factory. This allows customization of all relevant parts.
- The gesture verifier implements an interface (IKSPictureLoginGestureVerifier), so you can provide your own verifier via the factory.
- The main view can be created through the factory if you provide a subclass of KSPictureLoginView.
- The view responsible for displaying the user’s input is the drawing view. You can provide your own KSPictureLoginDrawingView.
- Gestures are created via a generic method. For your own gestures, create subclasses of KSPictureLoginGesture.
- If you provide your own drawing view, you can also specify a custom visualization layer. The default layer is a particle layer that looks as if you are drawing with fire. Thanks to Ray Wenderlich for the native ObjC version which inspired me here. Change it by providing an IKSPictureLoginUserInputVisualizationLayer.
The repo on GitHub contains the login controller and a complete demo implementation. It allows you to create and verify a gesture. In addition, you can visualize the stored gesture in a debug view. A small gimmick is the replay of the entered gestures, which uses a subclass of KSPictureLoginController to animate the drawing of the entered path with a CABasicAnimation.
Here’s a list of things I’m not really happy with but do not have time to address:
- The performance on the iOS Simulator is awful. For some reason with iOS 7.1 it became really slow and one can hardly draw a path. This was much better with previous versions. I encourage you to try the code on a real device. I’m using it here on an iPhone 5 and an iPad Air and it is all super fast.
- The XML serialization is a bit slow. I think it takes a bit too long to convert the gesture points. But you can hook in your own storage easily.
- The gestures are not encrypted or protected in any way, so please do not use this login type for high-security applications! Somebody might just grab the gesture XML from your device or replace it. You should at least enable iOS data protection and not store the resulting file in the documents folder.
- The code is prepared to support multi-finger gestures and multi-tap gestures as well as directional gestures, but I never finished these features.