Danish designer Bjørn Karmann has invented a camera that not only lacks a lens but doesn’t use light to create images.
Most people have probably never used a camera without a lens; the first camera with a lens dates back to around 1816. Before that, however, we had pinhole cameras, which make use of the camera obscura effect, a phenomenon described as far back as 500 BCE. While plenty of people still practice pinhole photography (there is even a Worldwide Pinhole Photography Day), a camera without a lens is basically useless to a photographer.
Without a lens, you can’t focus light onto your film or image sensor, and every photograph comes out as a wash of unfocused light. That said, Danish designer Bjørn Karmann has invented a camera that not only lacks a lens but doesn’t use light at all to create images.
Introducing – Paragraphica! 📡📷
A camera that takes photos using location data. It describes the place you are at and then converts it into an AI-generated “photo”.
— Bjørn Karmann (@BjoernKarmann) May 30, 2023
Painting a picture with words
The Paragraphica is a camera unlike any other. It looks like a plastic toy camera with three giant dials and a giant plastic spider where the lens should be, but don’t be fooled: it’s a real camera. Unlike a “conventional” camera that captures light, however, the Paragraphica calls open APIs, backed by AI models trained on mind-boggling amounts of data, to collect information about your location, your surroundings, and anything else of interest that belongs in the picture.
It also has a GPS module, and the location it reports is used to pull in weather data so the camera can depict your surroundings more accurately. All of the collected data is then composed into a paragraph of text (hence the name).
A typical paragraph looks like “An afternoon photo taken in Worli, Mumbai. The weather is slightly cloudy with chances of rain and a temperature of 27 degrees Celsius. The date is Saturday the 26th of June 2023. There is a school, a bank, and a mall nearby.” This paragraph is then converted into an image by a text-to-image AI model. Text-to-image generation falls under the umbrella of generative AI, which learns from vast datasets and can produce many kinds of original content: text, speech, images, music, and more.
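The pipeline described above, which assembles location, weather, and nearby-places data into a descriptive paragraph, could be sketched roughly like this. The names and structure here are illustrative assumptions, not Karmann’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """Hypothetical bundle of sensor/API readings; the field names are
    illustrative, not taken from Paragraphica's actual source code."""
    time_of_day: str
    neighborhood: str
    city: str
    weather: str
    temperature_c: int
    date: str
    nearby_places: list

def build_prompt(r: Reading) -> str:
    """Compose the readings into the descriptive paragraph that the
    camera would hand to a text-to-image model."""
    if len(r.nearby_places) > 1:
        places = ", ".join(r.nearby_places[:-1]) + f", and {r.nearby_places[-1]}"
    else:
        places = r.nearby_places[0]
    article = "An" if r.time_of_day[0].lower() in "aeiou" else "A"
    return (
        f"{article} {r.time_of_day} photo taken in {r.neighborhood}, {r.city}. "
        f"The weather is {r.weather} and a temperature of {r.temperature_c} "
        f"degrees Celsius. The date is {r.date}. "
        f"There is {places} nearby."
    )

reading = Reading(
    time_of_day="afternoon", neighborhood="Worli", city="Mumbai",
    weather="slightly cloudy with chances of rain", temperature_c=27,
    date="Saturday the 26th of June 2023",
    nearby_places=["a school", "a bank", "a mall"],
)
print(build_prompt(reading))
```

Running this reproduces the example paragraph above; that string would then be sent off as the prompt to the text-to-image model.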
The text-to-image generation is handled by the camera’s Raspberry Pi 4 Model B, a single-board computer, which displays the resulting image on a touchscreen LCD fitted to the back of the camera. The touchscreen also shows the paragraph before it is converted into an image, doubling as a viewfinder.
Through the eyes of a machine
In addition to the three dials we mentioned earlier, there is a red “shutter” button for taking pictures. The first dial selects the radius of the surrounding area to gather data from, the second controls noise, and the third sets the guidance scale (loosely analogous to focus). Now, if you’re thinking this would make a great hidden camera, or could be used for some sort of surveillance since it has no lens and would be hard to detect, it’s a bit more complicated than that. Much as a bat uses echolocation to “see” its surroundings in the dark, the Paragraphica offers you a different view of the same scene a regular camera would capture.
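One way to picture how those three dials might map onto a diffusion model’s generation parameters is below. The parameter names and value ranges are assumptions for illustration, not Karmann’s published specification:

```python
def dial_settings(radius_dial: float, noise_dial: float, guidance_dial: float) -> dict:
    """Map normalized dial positions (0.0 to 1.0) onto hypothetical
    generation parameters. The ranges are illustrative guesses:
      - search_radius_m: how far around the camera to look for data
      - noise_strength: how much randomness seeds the generation
      - guidance_scale: how strictly the image follows the text prompt
        (analogous to classifier-free guidance in diffusion models)
    """
    for d in (radius_dial, noise_dial, guidance_dial):
        if not 0.0 <= d <= 1.0:
            raise ValueError("dial positions must be between 0.0 and 1.0")
    return {
        "search_radius_m": int(50 + radius_dial * 950),      # 50 m to 1000 m
        "noise_strength": round(0.1 + noise_dial * 0.9, 2),  # 0.1 to 1.0
        "guidance_scale": round(1 + guidance_dial * 14, 1),  # 1 to 15
    }

# All three dials at their midpoint
print(dial_settings(0.5, 0.5, 0.5))
```

The point of the sketch is that each dial is just a knob over one numeric input to the generator, which is why Karmann can liken the guidance dial to “focus”: turning it up forces the output to track the paragraph more literally.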
In fact, the giant plastic spider on the camera is an homage to the star-nosed mole, an animal that is functionally blind but uses more than 25,000 sensory receptors on the fleshy tentacles around its nose to navigate its underground burrows. The picture a bat or a star-nosed mole forms in its head is going to be completely different from the image a human sees from the same angle, and in the same way, the Paragraphica offers you a completely different take on the scenery in front of you.
Like the 25,000 receptors on a star-nosed mole’s nose, this camera uses geolocation data, APIs, and other sensors to “describe” your world and then give you an image of a machine’s perception of what that description translates to.
A camera that can “create” pictures
While some argue that, since it was trained on billions of human photographs, what it creates is basically what it thinks we want to see, it’s still impressive that it can “think” at all and then build the world it believes we’re in from scratch. The other cool thing about a camera that relies on data, rather than the actual reality we live in, is that data can be manipulated. The question begging to be asked, then, is: can we manipulate that data to take impossible pictures?
Could you feed it data to make it believe it’s 1945, 1748, or 1600 to take pictures of the past and look back in time exactly where you are standing right now? Could you take a picture of the future? Could you take a picture of another planet? The possibilities are pretty endless with AI and maybe one day it will be able to tell us (or show us) exactly what the future looks like.