We developed the following truths as anchors for why it's so important to take a human-centered approach to building products and systems powered by ML:

If you aren't aligned with a human need, you're just going to build a very powerful system to address a very small, or perhaps nonexistent, problem. Machine learning won't figure out what problems to solve.

If the goals of an AI system are opaque, and the user's understanding of their role in calibrating that system is unclear, they will develop a mental model that suits their folk theories about AI, and their trust will be affected.

Just getting more UXers assigned to projects that use ML won't be enough. It'll be essential that they understand certain core ML concepts, unpack preconceptions about AI and its capabilities, and align around best practices for building and maintaining trust.

Every stage in the ML lifecycle is ripe for innovation, from determining which models will be useful to build, to data collection, to annotation, to novel forms of prototyping and testing.