Artificial Intelligence | Machine Learning
25 Jun 2020
In this article, you will be up and running, and will have done your first piece of reinforcement learning.
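To give a flavour of what a "first piece of reinforcement learning" involves, here is a minimal tabular Q-learning sketch on a hypothetical four-state corridor (the toy problem, constants, and reward are illustrative, not taken from the article):

```python
import random

random.seed(0)

# Hypothetical toy problem: states 0..3 in a row, actions -1/+1,
# reward 1.0 for reaching state 3, where the episode ends.
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```

After training, the greedy policy at every non-goal state moves right, toward the reward; the same update rule underlies far larger problems, just with a function approximator in place of the table.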
26 Jun 2020
In this article, we will see what is going on behind the scenes and what options are available for customising the reinforcement learning setup.
29 Jun 2020
In this article, we start to look at the OpenAI Gym environment and the Atari game Breakout.
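Interaction with a Gym environment follows one standard reset/step loop; the sketch below shows its shape with a stub environment standing in for Breakout (with gym and the Atari ROMs installed, `gym.make("Breakout-v4")` would replace the stub — its action space really is four discrete actions):

```python
import random

class StubEnv:
    """Stand-in for a Gym environment: same reset/step API as Breakout."""
    def __init__(self, episode_len=10):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # the observation (a screen frame, for Atari)

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_len
        return self.t, 1.0, done, {}  # obs, reward, done, info

    def sample_action(self):
        return random.choice(range(4))  # Breakout: NOOP, FIRE, RIGHT, LEFT

env = StubEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.sample_action()            # a trained agent chooses here
    obs, reward, done, info = env.step(action)
    total_reward += reward
```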
30 Jun 2020
In this article, we will see how you can use a different learning algorithm (plus more cores and a GPU) to train much faster on the mountain car environment.
2 Jul 2020
In this article, we will learn from the contents of the game’s RAM instead of from the pixels.
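Gym exposes RAM variants of the Atari environments (for example "Breakout-ram-v4"), whose observations are the console's 128 bytes of RAM rather than screen pixels. A common preprocessing step, sketched here, is scaling those bytes into [0, 1] before feeding them to a network:

```python
def preprocess_ram(ram_bytes):
    """Scale a 128-byte Atari RAM observation into floats in [0, 1]."""
    assert len(ram_bytes) == 128, "Atari 2600 RAM is exactly 128 bytes"
    return [b / 255.0 for b in ram_bytes]

# Illustrative input standing in for a real RAM observation:
obs = preprocess_ram(bytes(range(128)))
```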
3 Jul 2020
In this article, we will see how we can improve by approaching the RAM in a slightly different way.
6 Jul 2020
In this final article in this series, we will look at slightly more advanced topics: minimizing the "jitter" of our Breakout-playing agent, as well as performing grid searches for hyperparameters.
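At its core, a hyperparameter grid search just evaluates every combination drawn from small sets of candidate values and keeps the best. A generic sketch (the parameter names and the toy scoring function are illustrative; a real search would train and evaluate an agent per combination):

```python
from itertools import product

# Illustrative candidate values for two common hyperparameters.
grid = {
    "learning_rate": [1e-4, 1e-3],
    "gamma": [0.95, 0.99],
}

def evaluate(params):
    # Stand-in for "train an agent and measure mean episode reward".
    return params["learning_rate"] * 100 + params["gamma"]

best_score, best_params = float("-inf"), None
keys = list(grid)
for values in product(*(grid[k] for k in keys)):
    params = dict(zip(keys, values))
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params
```

Because the number of runs is the product of the set sizes, grids grow quickly; in practice each combination is usually dispatched to a separate worker.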
25 Sep 2020
In this article, we set up with the Bullet physics simulator as a basis for doing some reinforcement learning in continuous control environments.
28 Sep 2020
In this article, we look at two of the simpler locomotion environments that PyBullet makes available and train agents to solve them.
29 Sep 2020
In this article in the series, we start to focus on one particular, more complex environment that PyBullet makes available: Humanoid, in which we must train a human-like agent to walk on two legs.
30 Sep 2020
In this article we will adapt our code to train the Humanoid environment using a different algorithm: Soft Actor-Critic (SAC).
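For reference, what distinguishes Soft Actor-Critic is that it maximises an entropy-regularised return rather than the plain expected reward, with a temperature α trading exploration against reward:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
    \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```

The entropy bonus keeps the policy stochastic, which tends to help in continuous-control environments like Humanoid.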
1 Oct 2020
In this article, we will try to train our agent to run backwards instead of forwards.
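Reversing the direction of travel typically comes down to changing the sign of the forward-progress term in the reward. The generic wrapper below shows the mechanism with a stub environment standing in for Humanoid; note that in the real environment only the forward-velocity component would be negated, not the survival and energy-cost terms, which this simplified sketch ignores:

```python
class NegateReward:
    """Wrap an environment and flip the sign of its reward each step."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, -reward, done, info

class StubEnv:
    """Minimal stand-in with the reset/step API; one step per episode."""
    def reset(self):
        return 0

    def step(self, action):
        return 0, 1.0, True, {}

env = NegateReward(StubEnv())
env.reset()
_, reward, done, _ = env.step(None)
```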
2 Oct 2020
In this article in the series, we will look at even deeper customisation: editing the XML-based model of the figure and then training the result.