We had just begun work on a project exploring how we might help a medical-liability insurance company increase patient safety by learning from past incidents. As we talked, our conversation shifted away from excitedly imagining ideas for data-driven tools toward ways our work might potentially do harm.
What if our designs ended up raising premiums for doctors and, in turn, healthcare costs for patients? What if they kicked doctors off their insurance? What if doctors stopped reporting adverse events to keep their premiums from rising? There was real potential that our work could result in reduced patient safety or increased cost of care. We could inadvertently build a tool that could be used against the very people we were trying to help.
Today, data systems and algorithms can be deployed at unprecedented scale and speed—and unintended consequences will affect people at that same scale and speed. How can we make sure we're always putting people first when designing large-scale systems, especially when those systems will change over time and evolve without direct human supervision?
The seed planted during that bar conversation has grown into a set of principles, activities, and now a set of cards that our teams—data scientists, designers, and colleagues across every other discipline—use to ensure we're intentionally designing intelligent systems in service of people.
We’re far from the first people to ponder this. We’ve been inspired by organizations like AI Now and Data + Society, books like Weapons of Math Destruction and Technically Wrong, academic communities like FATML and CXI. In particular, we’ve been eagerly following O’Reilly’s series on data ethics (and encourage you to read their free ebook, Ethics and Data Science).
To develop our own set of guiding principles, we started with people. We talked to folks all across the globe: We interviewed IDEO teams about where they found challenges. We spoke to our clients about where they saw intelligent systems go awry. We spoke to the public about where smart designs seemed to cross lines. We observed and read about AI systems that had gone off the rails and worked to understand how this might have been avoided. We learned a lot.
We came up with an original design, iterated on it, and landed on a set of four design principles and ten activities that can help guide an ethically responsible, culturally considerate, and humanistic approach to designing with data. These activities are meant to provoke thought; they're a vehicle for introducing new ideas and stimulating conversations around ethics throughout the design process.
To start, here are our principles:
Data is human-driven. Humans create, generate, collect, capture, and extend data. The results are often incomplete, and the process of analyzing them can be messy. Data can be biased through what is included or excluded, how it is interpreted, and how it is presented. Unpacking the human influence on data is essential to understanding how it can best serve our needs.
Just because AI can do something doesn’t mean that it should. When AI is incorporated into a design, designers should continually pay attention to whether people’s needs are changing, or an AI’s behavior is changing.
While there are policies and laws that shape the governance, collection, and use of data, we must hold ourselves to a higher standard than “will we get sued?” Consider design, governance of data use for new purposes, and communication of how people’s data will be used.
Just as with any design endeavor, we know that we’re not going to get it right the first time. Use unanticipated consequences and new unknowns as starting points for iteration.
To get started with the activities, you can download the cards here. We hope that these activities provoke dialogue and provide concrete tools to help our community ethically design intelligent systems.
Thank you to the larger team who helped make these cards come to life—Ben Healy, Jane Fulton Suri, Jess Freaner, Mike Stringer, Justin Massa, Connie Oh, and KP Gupta, who designed them.