Episodes

  • Lessons learned
    2020/07/22

    What have we learned about machine learning and the human decisions that shape it? And is machine learning perhaps changing our minds about how the world outside of machine learning — also known as the world — works?

    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    33 min
  • Head to Head: The Even Bigger ML Smackdown!
    2020/07/22

    Yannick and David’s systems play against each other in 500 games. Who’s going to win? And what can we learn about how the ML may be working by thinking about the results?

    See the agents play each other in Tic-Tac-Two!


    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    24 min
  • Enter tic-tac-two
    2020/07/22

    David’s variant of tic-tac-toe that we’re calling tic-tac-two is only slightly different but turns out to be far more complex. This requires rethinking what the ML system will need in order to learn how to play, and how to represent that data.

    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    21 min
  • Head to Head: the Big ML Smackdown!
    2020/07/22

    David and Yannick’s tic-tac-toe ML agents face off against each other!

    See the agents play each other!


    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.


    25 min
  • Give that model a treat!: Reinforcement learning explained
    2020/07/22

    Switching gears, we focus on how Yannick’s been training his model using reinforcement learning. He explains the differences from David’s supervised learning approach. We find out how his system performs against a player that makes random tic-tac-toe moves.

    Resources:

    Deep Learning for JavaScript book

    Playing Atari with Deep Reinforcement Learning

    Two Minute Papers episode on Atari DQN

    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.


    26 min
  • Beating random: What it means to have trained a model
    2020/07/22

    David did it! He trained a machine learning model to play tic-tac-toe! (Well, with lots of help from Yannick.) How did the whole training experience go? How do you tell how training went? How did his model do against a player that makes random tic-tac-toe moves?

    For more information about the show, check out pair.withgoogle.com/thehardway/.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    17 min
  • From tic-tac-toe moves to ML model
    2020/07/22

    Once we have the data we need (thousands of sample games), how do we turn it into something the ML can train itself on? That means understanding how training works, and what a model is.

    Resources:
    See a definition of one-hot encoding
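    As a rough illustration (not taken from the episode), one-hot encoding replaces each categorical value with a vector that is all zeros except for a 1 in the slot for that value. For a tic-tac-toe cell (empty, X, or O), a minimal sketch might look like:

```python
# Hypothetical sketch: one-hot encoding tic-tac-toe cells.
# Each cell is one of three states; one-hot maps it to a 3-element vector.
CELL_STATES = ["empty", "X", "O"]

def one_hot(cell):
    """Return a one-hot vector for a single cell state."""
    return [1 if cell == state else 0 for state in CELL_STATES]

def encode_board(board):
    """Flatten a 9-cell board into a 27-element feature vector."""
    return [bit for cell in board for bit in one_hot(cell)]

board = ["X", "empty", "O"] * 3   # a toy 3x3 board, listed row by row
print(one_hot("X"))               # [0, 1, 0]
print(len(encode_board(board)))   # 27
```

    The function and state names here are illustrative; the point is that the nine cells become a fixed-length numeric vector a model can consume.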

    For more information about the show, check out pair.withgoogle.com/thehardway.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    22 min
  • What does a tic-tac-toe board look like to machine learning?
    2020/07/22

    How should David represent the data needed to train his machine learning system? What does a tic-tac-toe board “look” like to ML? Should he train it on games or on individual boards? How does this decision affect how and how well the machine will learn to play? Plus, an intro to reinforcement learning, the approach Yannick will be taking.

    For more information about the show, check out pair.withgoogle.com/thehardway.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri.

    23 min