- Your code can gradually iterate to improve
- You are allowed to stop improving the code
- Start simple to add immediate value
- Debug in small chunks
- Readable code is understandable
- Automate at the appropriate interface
The bot has been through a few iterations:
- 0 – every few seconds it fires an electromagnetic pulse (EMP), then grants itself a new EMP, effectively giving it infinite EMP devices
- 1 – every 100 milliseconds it iterates through the letters available and shoots each letter
- 2 – fixes a bug in (1) so that it only shoots letters which are used on the level
- 3 – once it knows which word is currently being shot at, it uses the letters in that word; when it doesn’t know the word it shoots all the letters on the level
- 4 – only shoots letters on screen, every 10 milliseconds: either the start of a word, or it continues on the current word
- 5 – only shoots letters on screen which are the start of a word, then focusses on that word to shoot it to bits (doesn’t wait 10 milliseconds to finish the word) – 100% efficiency
- 6 – essentially bot 5, but only waits 2 milliseconds
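The targeting rule that bots 4 and 5 converge on can be sketched as a small function. The game-state shape here (words with a `text` and a `typedCount`) is my own illustrative model, not the game's actual internals:

```javascript
// Sketch of the bot 5 targeting rule: keep focussing on the word already
// being shot; otherwise start on the first letter of a fresh word.
// The words/typedCount model is hypothetical, for illustration only.
function chooseLetter(words, currentTarget) {
  // Bot 5's focus rule: finish the current word before anything else
  if (currentTarget && currentTarget.typedCount < currentTarget.text.length) {
    return currentTarget.text[currentTarget.typedCount];
  }
  // Otherwise shoot the first letter of an untouched word on screen
  const next = words.find(w => w.typedCount === 0);
  return next ? next.text[0] : null;
}
```

For example, with no current target and `{text: 'cat', typedCount: 0}` on screen, it returns `'c'`; with a current target `{text: 'dog', typedCount: 1}` it returns `'o'` and stays on that word.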
I was having so much fun working on the bot that I ignored my common-sense rule-following facility and actually tried to jump from 2 to 5, but failed.
I scrapped the buggy bot.
I then moved a small step from 2 to 3 and gradually worked up to 5; it was much easier that way. It meant the code could be changed in small chunks, so there were no ‘big’ unfinished changes being checked in, and no need to branch by abstraction because the changes were so small.
I can see some additional optimisations I could try on bot 6, but there comes a point when the code you have written to automate the system is ‘good enough’ and I think I’ve reached that point with bot 6.
watch bot 6 in action
The first bot is horrible.
But. It demonstrated:
- the feasibility of automating the game
- an initial understanding of the structure of the game
And while the bot was doing its thing, I could manually play the game alongside it, so I essentially had a bot (or tool) supporting my personal interaction with the game.
And bot 1 was horrible to listen to – bzzzzzz – because of an error in the bot code (even though it got the job done). Watch bot 1 beat a human.
My bot was essentially a single block of code.
I would much prefer for it to have been separate functions, which were individually testable and tested, with readable names, etc.
But it wasn’t. I wanted to minimise the invasive nature of the code, so I kept it to a single anonymous function.
But, that didn’t stop me pulling out chunks of the code, to run them at the console, as though they were functions.
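A chunk pulled out of the anonymous function can be wrapped in its own function and pasted at the console to check it in isolation. This is an illustrative sketch; the variable names and game-state shape are assumptions, not the real bot's code:

```javascript
// Illustrative chunk: collect the distinct letters used by the words
// currently on screen (the fix that took bot 1 to bot 2). Wrapped as a
// function so it can be run standalone at the console; the input shape
// (an array of word strings) is assumed for the sketch.
var lettersOnScreen = function (gameWords) {
  var letters = {};
  gameWords.forEach(function (word) {
    word.split('').forEach(function (letter) { letters[letter] = true; });
  });
  return Object.keys(letters);
};

lettersOnScreen(['cat', 'cab']); // distinct letters: c, a, t, b
```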
Breaking the code into small chunks will help you understand and debug it. Name those chunks appropriately to aid understanding.
Write your code so you can read it:
- format it well
- use names to explain the code rather than comments
- write in a beautifier – http://jsbeautifier.org/
- compress for pasting into console – https://jscompress.com/
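Using names to explain the code, rather than comments, can look like this. Both versions below are illustrative sketches, not the bot's actual code:

```javascript
// With a comment doing the explaining:
//   if (w.t === w.s.length) { ... }  // word fully typed
//
// With a name that explains itself:
var wordIsFullyTyped = function (word) {
  return word.typedCount === word.text.length;
};
```

The named version reads at the call site (`if (wordIsFullyTyped(word))`) without needing the comment at all.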
Ignore any “automation test pyramid”:
- understand the interfaces available to you
- understand what tools can automate what interface
Automate at a level that achieves your aims, with tools that support your understanding and process.
In this case, I automated:
- application methods that are one level below the keypress events
This meant I didn’t need a GUI automation tool, and I had easy access to application model data.
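The difference between the two levels can be sketched as follows. The `game` object and its `shootLetter` method are hypothetical names standing in for whatever the keypress handler actually calls; only the commented-out `dispatchEvent` line uses the real DOM API:

```javascript
// Automating one level below the keypress events: call the method the
// keypress handler would have called. 'game' and 'shootLetter' are
// hypothetical names, not the actual application's API.
function shoot(game, letter) {
  game.shootLetter(letter); // the method one level below the keypress
}

// GUI-level automation would instead synthesise a DOM event, e.g.:
//   document.dispatchEvent(new KeyboardEvent('keypress', { key: letter }));
// which exercises the event plumbing but needs no access to the model.
```

Calling the method directly sidesteps event plumbing entirely, which is exactly why it doesn't exercise, or mitigate, the keypress risks listed below.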
Do not ignore risks. This approach to automating does not mitigate:
- the risk that the keypress events do not trigger on all platforms
- the risk that the letters you have to type do not render correctly on the device
And the approach is not without technical risk. In theory, since I’m adding extra code into the application environment, my bot might interfere with the game.
But, it does demonstrate that:
- we can complete the game
- the game becomes incrementally harder on each wave
- each wave ‘we encountered’ was playable
Lessons learned from Automating – Instantiated