The DxO One is an interesting camera. It’s designed to be used plugged into an iPhone, not as a standalone camera:
(Credit: The Verge)
For example, it doesn’t have a screen or a viewfinder[1]. You instead use the iPhone’s[2] screen as a viewfinder, with the excellent DxO app to control the camera. And what a viewfinder the iPhone is. The screen is huge, at 4.7 or 5.5 inches, compared to typical 3-inch camera screens. That may not sound like a big difference, but screen area is proportional to the square of the diagonal, so we’re comparing roughly 22 to 9 (4.7² ≈ 22 versus 3² = 9). In other words, the iPhone’s screen has more than twice the area of a typical camera screen. That size, combined with a high resolution (326 pixels per inch), means you can see far more detail than on a camera screen.
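The arithmetic is easy to check. Here’s a minimal sketch (the 16:9 phone and 4:3 camera-screen aspect ratios are my assumptions; aspect ratio shifts the numbers a little, but the conclusion holds):

```python
import math

def screen_area(diagonal_in, aspect=(16, 9)):
    """Area in square inches of a screen with the given diagonal and aspect ratio."""
    w, h = aspect
    scale = diagonal_in / math.hypot(w, h)  # inches per aspect-ratio unit
    return (w * scale) * (h * scale)

iphone = screen_area(4.7)              # 4.7-inch iPhone, 16:9
camera = screen_area(3.0, (4, 3))      # typical 3-inch camera screen, 4:3
print(iphone, camera, iphone / camera)  # the iPhone still has over 2x the area
```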
When I use my mirrorless camera’s screen, by contrast, I’m essentially shooting blind. Only when I copy the photos to my PC do I see which ones came out well and which didn’t, at which point it’s too late to retry the shot, perhaps from a different position, with different framing, or with a different subject. I’ve missed the shot, and an opportunity to improve my skills by trying various options to see what works. The iPhone’s screen is a boon by comparison.
The iPhone also has 100% sRGB coverage. And a touch screen[3], so I can tap to meter and focus. Touch UIs are also far more intuitive than standalone camera UIs, which seem stuck in the 1990s command-line era, with unlabelled buttons like Fn. It’s anybody’s guess what that does. A touch screen, instead, can show only the actions that make sense in a given context, and label them so that users know what they’re doing. Standalone cameras also hide things in menus, which are not discoverable. Touch UIs are an order of magnitude more usable.
Removing the screen makes the One small enough to fit in a jeans pocket[4]. It’s just 1 inch thick, which no other camera with a 1-inch sensor manages. The RX100, for example, is 1.4 inches thick, and I’ve found that it doesn’t fit comfortably in my jeans pocket: I have to shove it in, there’s room for nothing else, and it presses uncomfortably against my skin on the inside and the pocket on the outside, even when I’m standing, to say nothing of when I sit. It’s also hard to take out once it’s in. The RX100 isn’t really pocketable. A 1-inch-thick camera like the One, on the other hand, is genuinely pocketable. So, by getting rid of the screen, you get a high-quality camera (with a 1-inch sensor) that fits in your jeans pocket.
A camera connected to a phone can use the phone’s location services — GPS, cell- and Wifi-based location — for geotagging. Standalone cameras usually don’t have GPS. Even when they do, they lack cell- and Wifi-based location, which are needed indoors, where GPS doesn’t work. Which means that your photo collection is an untagged mess you have to sort yourself. Why should I create a San Francisco folder when the tags should be able to show me photos from San Francisco, or let me explore various areas of the city to see what I shot there? It makes no sense to make users waste time manually organising photos when the photos can carry enough metadata to organise themselves. Standalone cameras fail badly here.
Things like burst mode can also work better with companion cameras. iPhones have fast storage that can be written at 80 MB/s. Few SD cards are that fast: even a high-end UHS Speed Class 3 (U3) card, which most people don’t have, is rated only for 30 MB/s sustained writes. So, using the iPhone’s storage means you can shoot a burst at a high frame rate and for a long duration. As if fast storage weren’t enough, iPhones have a lot of memory, up to 2GB, so they can buffer many frames in RAM for even faster burst shooting. A companion camera could presumably do away with its own storage, reducing cost and increasing performance.
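To make the storage argument concrete, here’s a back-of-the-envelope sketch. The buffer size and frame size are illustrative numbers I picked, not measurements of any real camera:

```python
def burst_seconds(buffer_mb, frame_mb, fps, write_mb_s):
    """Seconds of burst shooting before the in-memory buffer fills.

    Frames arrive at fps * frame_mb MB/s and drain to storage at
    write_mb_s MB/s; the buffer absorbs the difference.
    """
    fill_rate = fps * frame_mb - write_mb_s  # net MB/s accumulating in RAM
    if fill_rate <= 0:
        return float("inf")  # storage keeps up, burst can run indefinitely
    return buffer_mb / fill_rate

# ~20 MB raw frames at 10 fps = 200 MB/s incoming, 500 MB buffer assumed
print(burst_seconds(500, 20, 10, 30))  # slow SD card: shorter burst
print(burst_seconds(500, 20, 10, 80))  # fast phone storage: noticeably longer
```

The fast write path doesn’t just extend the burst; once the burst ends, the buffer also drains sooner, so you’re ready to shoot again earlier.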
The iPhone also has plenty of CPU and GPU power, certainly compared to standalone cameras. This can be used to improve the quality of photos in many ways, which fall under the umbrella of computational photography. For example, if you’re using the iPhone’s built-in camera, when you press the shutter button, the iPhone takes multiple photos and fuses them together to reduce blur. If you’re photographing a group of people, and one person is moving in one frame and another person in another, Apple claims that the iPhone can pick each person from the frame in which they were steady, and combine them into a photo where everyone is sharp. As another example, when it’s dark, the iPhone combines multiple frames to reduce noise. Third-party apps like Cortex Camera take this idea further, by taking dozens of photos over several seconds, and fusing them to produce a single, low-noise photo. All without the need for a tripod — you can hold the phone in your hand. The software compensates for slight movements between frames.
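The core trick behind multi-frame noise reduction is simple: sensor noise is independent from frame to frame, so averaging N aligned frames cuts it by roughly √N. Here’s a toy simulation of that (real pipelines also align the frames and reject moving subjects, which this sketch skips):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scene: a flat grey patch; each "exposure" adds independent sensor noise.
scene = np.full((64, 64), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

# Multi-frame noise reduction at its simplest: average the aligned frames.
# 16 frames should cut the noise by about sqrt(16) = 4x.
fused = np.mean(frames, axis=0)

print(np.std(frames[0] - scene))  # per-frame noise, around 10
print(np.std(fused - scene))      # fused noise, around 2.5
```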
All these techniques could be used by companion cameras, by relying on the iPhone’s CPU and GPU power: multiframe noise reduction and blur reduction, or Cortex Camera-like long handheld exposures. The latter is impressive because long exposures usually require a tripod; doing it in software means I don’t have to carry one. Few cameras, even otherwise high-end ones like mirrorless cameras with APS-C sensors, can do handheld long exposures[5]. As another example, maybe a companion camera could do live exposure fusion or HDR in the preview?
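Exposure fusion itself is also less exotic than it sounds. Here’s a toy sketch of the idea, using a simplified Mertens-style “well-exposedness” weight that I’ve assumed (real implementations add contrast and saturation weights, and blend across scales):

```python
import numpy as np

def fuse(exposures, sigma=0.2):
    """Fuse same-size float images in [0, 1], preferring pixels near mid-grey."""
    stack = np.stack(exposures)                    # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise weights per pixel
    return (weights * stack).sum(axis=0)

# A dark and a bright exposure of the same two-region scene.
dark = np.array([[0.05, 0.45]])    # shadows crushed, highlights fine
bright = np.array([[0.40, 0.95]])  # shadows fine, highlights blown
fused = fuse([dark, bright])
print(fused)  # each pixel leans toward the better-exposed input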
The other advantage of a companion camera is third-party apps. DxO could have enabled an ecosystem of apps that add new features to the One, like exposure fusion, HDR, focus bracketing, and so on.
Apps are also easy to use, because they compartmentalise the phone’s features into discrete, manageable chunks. For example, you could have a Video app with all the controls and settings for video, as opposed to my Sony mirrorless camera, which mixes the video mode with the photo mode, and the settings for both as well. Imagine if all the features of all the apps on your phone were mixed together: it would be a complex, unusable mess. That’s exactly what standalone cameras have, while companion cameras can use apps to add open-ended functionality while keeping everything compartmentalised and easy to use.
A companion camera can also use the phone’s flash, instead of having its own. The iPhone has a True Tone flash, consisting of two LEDs of different colors, which it mixes to match the color temperature of the scene. This makes the flash-lit subject blend in, rather than having an ugly white cast in a scene lit by yellow light, say, as with most other flashes.
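The mixing idea can be sketched in a few lines. Apple doesn’t publish the True Tone algorithm or LED temperatures, so everything below is an assumption for illustration; one common approximation is to blend linearly in mireds (a million divided by the kelvin temperature), since LED mixes behave roughly linearly there:

```python
WARM_K, COOL_K = 2700.0, 5600.0  # assumed LED color temperatures, not Apple's

def led_mix(scene_k):
    """Fraction of warm-LED output to approximate the scene's color temperature."""
    to_mired = lambda k: 1e6 / k
    warm, cool, scene = to_mired(WARM_K), to_mired(COOL_K), to_mired(scene_k)
    w = (scene - cool) / (warm - cool)  # linear blend in mired space
    return min(1.0, max(0.0, w))        # clamp to what the two LEDs can reach

print(led_mix(2700))  # warm indoor light: all warm LED
print(led_mix(5600))  # daylight: all cool LED
print(led_mix(4000))  # in between: a blend of both
```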
A companion camera can use apps like Google Drive or OneDrive to automatically upload your photos to the cloud when you’re on Wifi. This serves as a backup in case your phone is lost, stolen, or broken, and as a way to transfer the photos to your PC for sorting and editing. Some standalone cameras have Wifi, but that adds to the cost, and in any case, they don’t have Google Drive or OneDrive apps to automatically back up your data and transfer it to your PC.
iPhones also use a journaled filesystem, which reduces the chance of data loss. Standalone cameras use filesystems that are not journaled, and can lose data. This is not theoretical: my SD card became corrupted when I was in Bali, which meant I lost photos and couldn’t take more, because the card was corrupt[6]. It’s a shame that standalone cameras are so unreliable. Companion cameras fix this.
Companion cameras can also rely on the phone’s video capabilities, unless they can do better. My $1000+ mirrorless camera, for example, takes worse video than my iPhone. That’s a shame in a standalone camera, but a companion camera could just rely on the phone. The iPhone 6s, for example, sets a high standard: Ultra HD, 1080p at 60 and 120FPS, and 720p at 240FPS. And the resulting video looks good; it’s not just specs for the sake of specs.
In summary, companion cameras can save cost by eliminating a lot of components: screen, viewfinder, GPS, flash for night photography, SD cards, a lot of memory for bursts, and video capabilities (unless they are better than the phone). This makes them cheaper, and small enough to fit in a jeans pocket, while having large sensors. And for each of the components they can skip, they can use the phone’s superior alternative. So it’s not just reducing cost, but doing things better.
Companion cameras can also do computational photography using the phone’s powerful CPU and GPU, such as multi-frame noise reduction and blur reduction, long handheld exposures, and live exposure fusion and HDR. They can support third-party apps to add capabilities like exposure fusion and timelapse, and to have a lot of functionality while remaining easy to use. In particular, apps like Google Drive and OneDrive can continuously back up your photos to the cloud, and transfer them to your PC. Finally, the journaled filesystem on phones keeps your photos safe, as opposed to the non-journaled, unreliable filesystems on SD cards.
For everyone who wants more than their phone’s camera can do, companion cameras have unique advantages over standalone cameras. I think it’s just a matter of time before standalone cameras as they exist today are viewed as crappy cameras, with poor screens, no GPS, no indoor location tagging, slow bursts due to slow SD cards, lack of apps to extend the functionality of the camera, poor single-color flashes, poor video, and dumb software that’s hard to use.
[1] You can use it standalone, but you won’t be able to see what you’re shooting. So, while standalone use is possible, the camera is not designed for that.
[2] It doesn’t work with Android — only with iPhones.
[3] Again, it’s not about having a touchscreen as a checklist item. My Sony mirrorless camera has one, but it’s a crappy resistive touch screen. You have to press so hard to get it to register the touch that the camera shakes, resulting in a blurred photo.
[4] I don’t know about skinny and tight girls’ jeans.
[5] My Sony mirrorless camera has a multiframe noise reduction mode, but it produces blurry results, worse than a single frame by itself. Maybe it doesn’t compensate for slight camera movement when handheld. In which case, what’s the point of having this feature at all? If you have a tripod or other means of keeping your camera steady, you can use a long exposure to reduce noise. The whole point of multiframe noise reduction is that you can use it handheld, but Sony’s dumb implementation snatches defeat out of the jaws of victory.
[6] Or would have, if I didn’t have a laptop with me to salvage what I could and then format the card.