How tags & smart search could change the way we organize personal data
From the early days of the personal computer until the arrival of the iPhone, the way we handled our personal data remained largely unchanged: a tree of directories and numerous levels of sub-directories, commonly referred to as the ‘file system’. The file system has a clear hierarchy. It is 100% formal, 100% organized: a system designed for software to store its data. Over time, people started using that same system to store their personal data as well. Which leaves us with the core problem:
The file system was designed for software, not humans.
That is why most people’s file systems are in a state of chaos. We are evidently not capable of maintaining a strictly formal system. In the physical world, we put things down wherever is convenient and still manage to find them later.
But in the file system, data often gets lost in deeply nested sub-directories.
The PC operates in two layers:
a) The software
b) The file system
Software creates and edits files, which you then store somewhere in the file system. If you want to edit a file, you first have to look it up in the file system.
Apple recognized the weakness of the file system. So with the iPhone, they unified the software and the file system. There are only apps. Apps create files and store them internally. If there’s something to do, you don’t have to look up the file, just open the app.
By doing so, Apple solved one problem but created another: files are now trapped in individual apps. You can neither edit a file across apps nor easily get an overview of your data across them.
Recently, Daniel Abernathy wrote a brilliant piece on how tagging will replace the photo album. He called for two major principles: Semi-Automatic Tagging and Smart Searching. He recognized that organizing data should happen behind the scenes. Semi-automatic tagging works by automatically analyzing the content of a photo (via location, time, OCR, face recognition, …) and attaching an array of tags that describe it. In a second step, you can then search and organize your photos in natural language using a Graph Search-like engine.
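To make the idea concrete, here is a minimal sketch of what semi-automatic tagging could look like. Everything here is hypothetical: the `Photo` record, the `auto_tags` function and the choice of metadata fields are illustrative assumptions, not Abernathy’s actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    path: str
    timestamp: str                                   # ISO date, e.g. "2013-08-21"
    location: str = ""                               # reverse-geocoded place name, if known
    faces: list[str] = field(default_factory=list)   # names from face recognition
    ocr_text: str = ""                               # text recognized inside the image

def auto_tags(photo: Photo) -> set[str]:
    """Derive descriptive tags from a photo's content and metadata."""
    tags = {"photo", photo.timestamp[:4]}            # media type + year
    if photo.location:
        tags.add(photo.location.lower())
    tags.update(name.lower() for name in photo.faces)
    tags.update(word.lower() for word in photo.ocr_text.split())
    return tags

photo = Photo("IMG_0042.jpg", "2013-08-21", "Berlin",
              faces=["Anna"], ocr_text="Brandenburg Gate")
print(auto_tags(photo))
```

The point is that the user never types a tag: location, date, faces and recognized text all become searchable descriptors automatically.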
I believe that these principles can be applied not only to photo albums, but to the entire file system. You’ll still access data through an app, which then stores the data in a central directory. The app automatically attaches tags describing the file’s content, specifications, context, access permissions, etc.
Because all apps use the same directory, one file can be edited by multiple apps.
Data would be organized by tags. Tags are added automatically by both the respective app and by a system background-agent that maps out relations between files by analyzing content, tags and usage patterns.
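One very simple way such a background agent could relate files is by tag overlap: two files that share enough tags probably belong together. This is only a sketch of the idea; the `related` function, the threshold and the sample file names are all made up for illustration.

```python
def related(files: dict[str, set[str]], path: str, min_shared: int = 2) -> list[str]:
    """Find files that share at least min_shared tags with the given file."""
    own = files[path]
    return [other for other, tags in files.items()
            if other != path and len(own & tags) >= min_shared]

files = {
    "report.doc": {"syria", "war", "2013", "document"},
    "photo1.jpg": {"syria", "war", "photo"},
    "recipe.txt": {"cooking", "pasta"},
}
print(related(files, "report.doc"))   # → ['photo1.jpg']
```

A real agent would of course also weigh content similarity and usage patterns, but even plain tag overlap already groups the project files together without any manual filing.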
This way the system would automatically organize groups of related files and you could easily find data with a Graph-Search like engine. Search also opens the possibility to integrate services like Spotify into the file-system UX.
Say you’re working on a project about the war in Syria. The project includes multiple types of data, such as documents, photos and videos, and involves multiple apps. By analyzing the tags, content and usage patterns, the system could map out the context and relations of all files involved in the project. Just search for “Syria War” and all your files show up, automatically organized by tags.
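At its simplest, that search is just a match between query words and file tags. The sketch below assumes the tag sets from the previous idea; the `search` function and the sample files are hypothetical.

```python
def search(files: dict[str, set[str]], query: str) -> list[str]:
    """Return every file whose tags cover all words of the query."""
    terms = {word.lower() for word in query.split()}
    return [path for path, tags in files.items() if terms <= tags]

files = {
    "report.doc": {"syria", "war", "document", "2013"},
    "clip.mov":   {"syria", "war", "video"},
    "cat.jpg":    {"cat", "photo"},
}
print(search(files, "Syria War"))   # → ['report.doc', 'clip.mov']
```

A Graph Search-like engine would add natural-language parsing on top (“photos of Anna in Berlin”), but the underlying lookup is still tags all the way down.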