We’re finally going to tackle the very thorny subject of A.I. this week.

A.I. – Part 1

A.I. has become a buzzy catchall term for applications that perform complex tasks that once required human input. From conversing with end users, to playing chess, to making movies, A.I. has been with us longer than most people realize.

Media headlines are dominated by news about A.I. art, but there has also been unprecedented progress across many widely disparate fields. Everything from video to biology, programming, writing, and translation is seeing A.I. advance at the same incredible pace. These breakthroughs are all underpinned by a new class of A.I. models that are more flexible and powerful than anything that came before. Because they were first used for language tasks like answering questions and writing essays, they’re often known as Large Language Models, or LLMs.

There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on; to learn anything, there must be enough data, with enough richness, for the algorithms to produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.

Graphics card (GPU) advances brought computing power up to speed. Data? There’s plenty of that when you look at the Internet as a whole. The last part, modeling, advanced thanks to a surprising source: attempting to translate between human ...
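To make the holy trinity concrete, here is a minimal sketch of all three pieces working together: a model (an algorithm mapping inputs to outputs), data (examples to learn from), and compute (the loop that runs the training). This toy uses a one-weight linear model and plain gradient descent; everything here is illustrative and not taken from any particular library.

```python
# Data: examples of an underlying pattern (here, y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(-5, 6)]

# Model: a tiny linear map, y_hat = w*x + b, with two learnable parameters.
w, b = 0.0, 0.0

# Compute: repeatedly run the model over the data and nudge each
# parameter down the gradient of the squared error.
lr = 0.01
for _ in range(2000):
    for x, y in data:
        y_hat = w * x + b          # model's prediction
        err = y_hat - y            # how wrong it was
        w -= lr * err * x          # gradient of (err**2)/2 w.r.t. w
        b -= lr * err              # gradient of (err**2)/2 w.r.t. b

print(round(w, 2), round(b, 2))    # learned parameters approach 2 and 1
```

Real systems like LLMs scale each ingredient enormously (billions of parameters, Internet-scale data, GPU clusters), but the shape of the process is the same: a flexible model, rich data, and enough compute to fit one to the other.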