Every year at Adobe’s annual conference, Adobe MAX, the creative software company shows off new technology and apps that it’s working on.
Adobe makes it clear that these are purely prototypes in development, which may or may not become real features in apps such as Photoshop and XD – or brand new tools. But over the last few years, around 50 percent of the technology shown at Sneaks has either made its way into apps you can download from Creative Cloud or become a brand new app. Check out what Adobe announced at last year’s Sneaks event.
This year many of the developments on show are powered by Adobe’s AI and machine learning technology, Adobe Sensei. Adobe describes the technology as a way to enhance and speed up your creative process – by removing mundane tasks – rather than posing a threat to the artistic craft or replacing an artist’s job.
We begin by delving a little deeper into four new prototypes – data visualisation tool Lincoln, a tool for colourising photos called Scribbler, the object-removing Cloak and the colour-palette-creating Project Playful Palette.
At this year’s Adobe MAX in Las Vegas, Adobe Sneaks was presented by the company’s Paul Trani and comedian Kumail Nanjiani, who plays Dinesh on the HBO series Silicon Valley.
Adobe says one of the areas where it doesn’t have good tools is data visualisation. This is where Lincoln comes in. It lets you design professional graphs within minutes, democratising the work of data journalists and graphic designers.
The prototype aims to make it easier for designers to create data visualisations, graphs and infographics in a world where this is becoming a more valued skill – just take a look at the infographics in The New York Times or The Guardian. Currently, creating graphs means an awkward choice between using Excel, sketching your own (which is time-consuming and can introduce inaccuracies) or learning how to code. Lincoln wants graphic designers to be able to create charts without having to be coders, so Adobe is working on this data-explanation tool. Instead of working with data first and then creating the visual, Adobe says this tool lets you start with any kind of visual and then input the data to create the graph, so “a nugget of information can be illustrated clearly.”
With Lincoln you can start with a sketch and then combine it with data from a spreadsheet using a data variable palette, adjusting properties such as scale, colour and text. Lincoln should work alongside other Adobe sketching tools, and you can create more than one graph using Adobe XD’s Repeat Grid tool. You can add text, add images from CC Libraries, take control of the axes, and even switch out the data – the graphics you’ve created will respond to the new data you input. See the difference between these two graphs, for example.
The graphs can be as simple or complex as you want (as seen below), and Adobe says “there’s no reason why we can’t go interactive and animate as well.”
Using Adobe’s AI technology Sensei, the Scribbler tool can instantly colourise black-and-white portrait photos and sketches. Trained on thousands of images, the AI recognises facial features and composites colour onto them. At this stage it only works with portraits, not other body features. Using Scribbler seemed fairly effortless in the demo: simply upload an image, or sketch one directly into the designated box, and Sensei will quickly add colour to it in a separate box, as seen in the image above. This could be a helpful tool for character designers and artists. Adobe says it works across all skin tones, although the only examples demoed were people with white skin. You will also be able to add texture and materials to objects, as seen here.
First you need to upload an image of the texture you want.
Then you can crop the texture and drop it into the area of the image you want it to cover. You may need to place a few swatches to give the network an idea of what you want.
Then watch the texture wrap itself around your sketch.
Users will also be able to tweak the colour of finer features such as the eyes and lips. The style is meant to emulate reality, so at this stage there are no other, more painterly styles.
The technology is based on a paper published with Berkeley in July this year, although the project started a year ago. There are already other apps offering a similar service, but Adobe says its work is concurrent with these.
When asked if tools like Scribbler, with the power of AI, threaten creatives who have mastered fields like painting, Adobe said it believes the tools won’t replace artists; rather, they exist to enhance the artist and make the creative process easier.
Cloak is a simple but brilliant, helpful tool – it aims to be the easiest possible way to remove unwanted objects from your video. ‘Object’ is a wide-ranging term here: it could mean a spotlight, a smudge on a shirt, or an entire person if you want (or in this case, a couple).
You just need to select the object you want to remove in After Effects, using the Polygon tool to trace around it, as seen in this image, and then track it throughout the duration of the video. You also have to render out the mask in order to run it.
Project Playful Palette
This tool is all about bringing the ability to mix colour palettes, like you would with traditional painting, into the digital realm.
Adobe says it can be frustrating having only limited digital tools and no way to explore colour blends, so it has taken what is essential about the physical painting experience and translated it into a digital workflow.
With Project Playful Palette, you can mix and unmix colours as easily and as often as you want throughout your creative process.
Using a digital palette dish to mix colour swatches from the colour wheel, you can edit colours from green to grey and back again, and as you do so the colour on your painting changes to match, as seen above. You can change swatch colours at any time, and use the Eye Dropper tool to pick the colour you want from your image.
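Under the hood, sliding a swatch between two mixed colours like this can be approximated with simple interpolation. Here is a minimal sketch in Python – plain linear RGB blending, which is an assumption and a simplification of whatever colour model the prototype actually uses:

```python
def blend(c1, c2, t):
    """Linearly interpolate between two RGB colours (0-255),
    t=0.0 gives c1, t=1.0 gives c2."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

green = (60, 160, 70)
grey = (128, 128, 128)

# Sweep a swatch from pure green to pure grey in five steps,
# the way a mixed puddle shifts as you drag within it.
swatches = [blend(green, grey, t / 4) for t in range(5)]
```

Because each swatch is defined by its endpoints and a mix amount rather than baked-in pixel values, "unmixing" is just moving `t` back towards zero – which is what makes the non-destructive editing described above possible.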
This technology isn’t trying to replicate the experience of traditional painting, but to reinvent it. Adobe says there’s nothing in the technology that would prevent your creative history from being kept (so you could go back and change the pathway), but the prototype doesn’t support this yet.
Project Quick 3D
3D modelling can be difficult to learn for designers who are only familiar with Photoshop and Illustrator, but Project Quick 3D makes it easier for graphic designers to create 3D models. It’s a sketch-based modelling application for Adobe Dimension (previously Project Felix) that uses Adobe Sensei, and the 3D model is created at the same angle as your sketch. The models are impressively detailed.
Users can create simple sketches and let the Adobe Sensei AI turn the rough sketch into a perfectly formed 3D asset, making 3D modelling as easy as drawing. See how you can go from sketch to 3D model in Dimension in the images below.
Project Scene Stitch
This tool is for when you want to remove part of a scene in a photograph and replace it with another scene using Adobe Sensei and Adobe Stock.
To do this currently, designers select the area they want to remove using the Quick Select tool in Photoshop and run Content-Aware Fill, and that part of the photograph is gone.
Content-Aware Fill tries to find other content in the image to fill the hole, but it sometimes creates repeated patterns.
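To see why image-only filling produces those repeated patterns, here is a toy sketch of the idea – filling each masked pixel by copying a nearby one. This is my own drastically simplified stand-in, not Adobe’s actual patch-search algorithm:

```python
def naive_fill(img, holes):
    """Fill masked pixels by copying the nearest already-filled
    neighbour (left first, otherwise above). A toy stand-in for
    the far smarter patch search Content-Aware Fill performs."""
    out = [row[:] for row in img]
    for r, row in enumerate(out):
        for c in range(len(row)):
            if (r, c) in holes:
                row[c] = row[c - 1] if c > 0 else out[r - 1][c]
    return out

# A 3x3 'image' of brightness values where the centre pixel is missing:
patched = naive_fill([[1, 1, 2],
                      [1, 0, 2],
                      [3, 3, 3]], holes={(1, 1)})
```

The hole can only ever inherit values already present in the picture – which is exactly the limitation Scene Stitch sidesteps by pulling candidate content from Adobe Stock instead.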
With Project Scene Stitch, instead of searching the image for content, it uses Sensei to find other images for content by searching Adobe Stock. Automatically you get multiple possibilities to fill the image.
Take replacing an entire foreground: there may be nothing in the top of the image that can plausibly fill the bottom, but with Scene Stitch you can swap out the bottom for a river, a tennis court or other scenes that look convincingly integrated into the image. Drawing on 100 million images in Adobe Stock, you can replace parts of a cityscape; and because the tool works with semantics, it doesn’t just fill holes but lets you reinvent the image.
Physics Pak is all about moving objects around in Illustrator. Bring elements into the app, scatter them into a rough shape, and watch them grow, shift and mesh together before your eyes until all the edges fit.
Physics Pak automatically moves your assets into the shape you want, and can add extra filler elements to even out the packing, all driven by a Sensei-powered physics simulation. If this technology ever launches, you would no longer need to spend two hours in Illustrator trying to fit assets together.
“Design done through the magic of physics,” says Paul Asente from Adobe.
Sonicscape Audio allows you to pick where in 360-degree video content you want to place audio, so the sound matches the 360-degree experience. This makes it perfect for VR production.
For example, if someone is in a VR experience and a person on their left is talking to them, you want to hear the sound coming from the left. At the moment, you often have to correct the alignment in post, according to Adobe.
If you open a wave file in Audition you’ll notice it has three channels; you can pull these apart and work with them in Premiere Pro.
360-degree videos are captured with a 360-degree camera, with audio recorded on the front-facing microphone. With this technology, you can align the audio with the video by matching it to a visual cue – the app literally shows your audio as a visualisation.
Using this visualisation you can manipulate the location of the sound by moving it around the 360-degree video. You can also move an audio object through space and bring it closer to you.
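The three channels mentioned above are consistent with first-order horizontal ambisonics (W, X, Y), a common format for 360 audio – though that is my assumption rather than something Adobe spelled out. Under that assumption, moving a sound around the listener is just a rotation of the directional X/Y components, while the omnidirectional W channel stays untouched:

```python
import math

def rotate_ambisonics(w, x, y, degrees):
    """Rotate one first-order horizontal ambisonic frame (W, X, Y)
    around the vertical axis. W carries the omnidirectional signal
    and is unchanged; only the directional X/Y pair rotates."""
    th = math.radians(degrees)
    return (w,
            x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) + y * math.cos(th))

# A source panned straight ahead puts all directional energy in X...
w, x, y = 1.0, 1.0, 0.0
# ...and a 90-degree rotation moves that energy entirely into Y.
w2, x2, y2 = rotate_ambisonics(w, x, y, 90)
```

Applying this per sample frame is all it takes to "drag" a sound around a 360-degree scene the way the demo describes; bringing a sound closer would additionally scale its level.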
Project Side Winder
Project Side Winder allows you to see an object in video footage from all angles while wearing a VR headset. Rather than looking at the object flat, you can move your head and see it from different angles.
Normally, when you look at a scene in VR – this Christmas tree, for example – the camera sits in one spot and records all around it, but if you move your head you can’t see the object from another position.
With a depth map from Project Side Winder, every pixel is encoded with how far away it is, so when you turn the feature on you can see around the object. Inside the headset you don’t need to move your head much: even a little motion reveals more of the scene – a small glimpse of what VR could look like.
Project Deep Fill
Similar to Project Scene Stitch, this prototype lets you select and remove objects from a face – facial features such as eyebrows, for example.
At the moment you can do this using Photoshop’s Content-Aware Fill, but because it relies on copying other areas around the selection, it’s not always accurate.
If you want to adjust parts of a face, such as removing an unwanted plaster, you can use this tool for a more accurate reconstruction of the face.
You can also use this tool to remove unwanted people from your travel pictures: simply select the people you want to remove, click a button, and they’re gone. You can also adjust content, such as carving a shape out of a rock.
Project Puppetron is a really fun tool, but also useful for changing styles in Character Animation.
Using the tool you can transform your face into any digital style, physical style, person or animal you want, with the ability to adjust how humanised the result looks when applied to your face, as seen here.
Once again using Adobe’s AI technology, you can also apply physical materials, and use a checklist to make sure all the elements are as you want them, as seen here.
The app is designed to work with Character Animator, so you can bring the final asset in and animate the newly stylised face. You can literally turn yourself into a portrait or statue.