The React TodoMVC example uses Director for routing, so I decided to use it in Project Wonderchicken. The API looked straightforward, but as usual the toy examples proved inadequate for real-world use. Luckily, I am somewhat competent and know how to use Google and Stack Overflow. Unfortunately, the only results were about film directors and Adobe Director. What I needed was an example of how to use Director with React and the Flux pattern in a modular way, without hard-coding paths in every component, while still being able to trigger state changes appropriately.
The optgroup tag is not used much with drop-down lists, so it is not surprising that WTForms does not have an abstraction for optgroups. That is not a big deal until the day they are actually needed. This was the case with Project Wonderchicken last week, and as usual it led to a lot of googling. Eventually I gave up and decided to create my own select field with optgroup support.
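The core idea is small enough to show without WTForms at all. This is a minimal sketch of the rendering logic (the function name and the grouped-choices format are my own, not WTForms conventions): choices carry a group label, and the widget emits one optgroup per group.

```python
from html import escape

def render_grouped_select(name, grouped_choices, selected=None):
    """Render a <select> whose options are grouped into <optgroup> tags.

    grouped_choices: list of (group_label, [(value, label), ...]) pairs.
    This format is hypothetical; WTForms' SelectField normally takes a
    flat list of (value, label) pairs.
    """
    parts = ['<select name="%s">' % escape(name, quote=True)]
    for group_label, options in grouped_choices:
        parts.append('<optgroup label="%s">' % escape(group_label, quote=True))
        for value, label in options:
            sel = ' selected' if value == selected else ''
            parts.append('<option value="%s"%s>%s</option>'
                         % (escape(str(value), quote=True), sel, escape(label)))
        parts.append('</optgroup>')
    parts.append('</select>')
    return ''.join(parts)
```

In an actual WTForms field, this same loop would live in a custom widget's `__call__`, reading the grouped choices off the field instead of a plain argument.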
I know what you're thinking. What is Project Wonderchicken? Well, I hate to tell you, but it has nothing to do with creating flying chickens with super strength and laser vision. Really, it's just the codename for a relatively mundane web application I started last month.
If you're like me, sometimes you want to rename or move your Vagrant project to a new directory. Maybe you changed the name of your project or decided that the directory structure of your Vagrant projects was suboptimal. There are a lot of cases where changing directories makes sense. Unfortunately you can't just do mv old_directory new_directory: a subsequent vagrant up will nuke your existing VM and create a new one.
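One workable approach (assuming Vagrant 1.6+, which tracks projects in a JSON machine index at ~/.vagrant.d/data/machine-index/index) is to move the directory and then repoint the paths stored in that index. A sketch of the repointing step; the field names here are taken from that index format, so double-check them against your own index file:

```python
import json

def repoint_machine_index(index, old_dir, new_dir):
    """Return a copy of Vagrant's parsed machine index with every machine
    whose project paths pointed at old_dir repointed at new_dir.

    index: the parsed contents of ~/.vagrant.d/data/machine-index/index.
    The 'vagrantfile_path'/'local_data_path' keys are assumptions based on
    the Vagrant 1.6-era index format.
    """
    index = json.loads(json.dumps(index))  # cheap deep copy
    for machine in index.get('machines', {}).values():
        for key in ('vagrantfile_path', 'local_data_path'):
            if key in machine and machine[key].startswith(old_dir):
                machine[key] = new_dir + machine[key][len(old_dir):]
    return index
```

In practice you would load the index file, run this, and write it back while no other vagrant process is using it.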
The LinkSprite SIM900 GSM shield can work with a Raspberry Pi, but you will need an Arduino, a compatible USB cable, and an external power source for the shield. An Arduino board with a 16 MHz clock may be necessary.* In short, it works the same way as with your laptop, except for the extra power for the shield. This setup is not as clean as with the pcDuino v2, but Raspberry Pis are more popular these days, so I thought it'd be interesting to see how it would work.
With the exception of the API, all example/exercise pages are generated as static HTML pages. This decision was influenced mainly by my experience using Jekyll and Khan Academy's old exercise framework. There are a lot of static page generators out there; I even developed one two years ago, using Node.js and Markdown no less. One of the draws is that they're fast and easy to develop - far simpler than the traditional CRUD blog that people build when learning a new web framework or programming language.
This week I integrated the LinkSprite SIM900 GSM shield with a pcDuino v2. The latter can be thought of as a slightly more powerful Raspberry Pi with built-in WiFi and Arduino headers. The nice thing about the pcDuino v2 is that you can access the hardware interfaces using any programming language. Python and C libraries can be downloaded from GitHub, but be warned that the Python library is buggy and incomplete. There is enough implemented to get started with controlling the GPIO interfaces; serial and SPI are missing, but regular Python libraries exist for those. For the integration, I chose Python since this project involved string processing and some database storage. The only hardware access required was a serial connection to the GSM shield. This proved tricky and uncovered some quirks that others may find helpful.
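As a taste of what talking to the shield looks like: sending an SMS in text mode is a short AT-command exchange (these are standard GSM 07.05 commands, not LinkSprite-specific). A sketch of the command framing and reply parsing, kept separate from the pyserial wiring so it is easy to test in isolation:

```python
CTRL_Z = '\x1a'

def sms_command_sequence(number, message):
    """Build the AT command exchange for sending an SMS in text mode."""
    return [
        'AT+CMGF=1\r',                # switch the modem to text mode
        'AT+CMGS="%s"\r' % number,    # modem answers with a '>' prompt
        message + CTRL_Z,             # Ctrl-Z terminates the message body
    ]

def parse_cmgs_reply(reply):
    """Pull the message reference out of a '+CMGS: <n>' reply, or None."""
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith('+CMGS:'):
            return int(line.split(':', 1)[1].strip())
    return None
```

With pyserial, each string would be written to the port in turn, and the quirks (command echo, the delay before the '>' prompt) handled around these calls.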
Docker is the software that makes my "Code Examples" project possible. I do not have the funding to run a small network of distributed virtual machines (VMs), and this is the promise of Docker: the ability to run lightweight applications in isolated virtualized environments. Actually, the real killer idea of Docker is that it adds an easy-to-use interface for creating and running Linux Containers (LXC). Docker is still early in development, but it can be a viable option for securely executing untrusted code once it is clear that the isolated environments can't be broken out of.
The previous posts illustrated different techniques that could be used in an AI to beat Wolfenstein 3D using machine vision. Actually applying the techniques together in real time proved to be difficult, but it was necessary to start the test runs. The current bot uses a simple state machine to perform tasks such as localization, door searching, and attacking enemies. The code is very rough at the moment. There is still a long way to go before the bot can beat the first level of the game, but it's definitely possible.
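The state machine itself is nothing fancy. This is not the bot's actual code, just a sketch of the shape, with hypothetical frame flags standing in for the vision results:

```python
class Bot:
    """Minimal state machine sketch: each state is a method that does one
    step of work and returns the name of the next state."""

    def __init__(self):
        self.state = 'localize'

    def step(self, frame):
        # dispatch to the handler for the current state
        handler = getattr(self, 'state_' + self.state)
        self.state = handler(frame)
        return self.state

    def state_localize(self, frame):
        # hypothetical flag: position is known once enough walls measured
        return 'find_door' if frame.get('localized') else 'localize'

    def state_find_door(self, frame):
        if frame.get('enemy_visible'):
            return 'attack'          # enemies preempt exploration
        return 'find_door'

    def state_attack(self, frame):
        return 'attack' if frame.get('enemy_visible') else 'find_door'
```

The real transitions would be driven by the thresholding results from the earlier posts rather than by pre-computed flags.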
Thresholding was once again used, this time for measuring walls. Specifically, the ceiling and floor are shades of gray. This does a good job except in a few cases, namely ceiling lights or rooms that have gray walls.
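The gray test boils down to checking that a pixel's color channels are nearly equal. A sketch; the tolerance here is a made-up number that would need tuning against real screenshots:

```python
def is_gray(r, g, b, tol=12):
    """A pixel reads as a shade of gray when its channels are nearly equal.
    tol is a hypothetical tolerance, not a tuned value."""
    return max(r, g, b) - min(r, g, b) <= tol

def wall_height(column, tol=12):
    """Count the non-gray pixels in one screen column: ceiling and floor
    are gray, so what remains is wall (modulo the ceiling-light and
    gray-walled-room failure cases)."""
    return sum(1 for (r, g, b) in column if not is_gray(r, g, b, tol))
```

Running `wall_height` over every column gives a rough wall-distance profile across the frame.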
Detecting enemies was much harder than detecting doors. The enemy sprites are tan-colored. Unfortunately this hue is similar to the player's hand, and some of the walls are brown as well.
In the previous post about door thresholding, the strategy was to use RGB, and the results were not good enough. After studying the doors further, HSV proved to be a better choice, since the hue remains roughly the same among the different shades of teal. This only works because the game does not apply dynamic lighting.
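A sketch of the hue test using Python's colorsys. The hue band and saturation cutoff here are guesses to illustrate the idea, not the values from the bot:

```python
import colorsys

def is_door_pixel(r, g, b, hue_lo=0.45, hue_hi=0.55):
    """Classify a pixel as 'door' by hue alone. Teal sits around hue 0.5
    on colorsys's 0-1 scale; with no dynamic lighting in the game, the
    different shades of a door share roughly one hue, so a narrow hue
    band catches them all."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_lo <= h <= hue_hi and s > 0.2   # skip near-gray pixels
```

The same shape of test, with a different hue band, is what makes the tan enemy pixels so troublesome: their band overlaps the hand and the brown walls.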
Because I lack creativity for naming projects, this project is currently named "Code Examples" and was previously known as "Java Examples." The seed for this project came from tutoring in the CS lab the last two semesters. At my university, introductory programming courses are taught in Java and C++, so most of my efforts go into explaining loops, conditional statements, variables, and some object-oriented programming. Not many students come in for tutoring. It tends to be the same handful of students who come in every time a new assignment is released. Any more and I wouldn't be able to help everyone. This kind of speaks to how tutoring is not scalable. This is the same issue I noticed as a reading tutor at an elementary school. Certainly there are great benefits of having one-on-one attention, but it limits reach.
SIFT is an awesome algorithm, but it is somewhat computationally expensive despite many optimizations to improve performance. In games, fast responses are important for success, so even a few milliseconds of delay could be detrimental on the harder difficulty levels of Wolfenstein 3D. Additionally, one could argue that the simpler solution is the best. Thresholding can be effective in machine vision problems because the program can be optimized for the problem domain instead of being generalized to, say, many different games.
The following is my first attempt at implementing the thinning algorithms described in Davies' Computer and Machine Vision book. This was one of the assignments in the Computer and Machine Vision course I'm taking. As a bonus, there are also some details on implementing the P4 PBM image format.
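For reference, P4 is the packed binary flavor of PBM: an ASCII header ('P4', width, height), one whitespace byte, then rows of bits packed MSB-first with each row padded to a whole byte; a set bit means black. A minimal decoder sketch (header comments starting with '#' are not handled here):

```python
def read_p4(data):
    """Decode a binary PBM (P4) byte string into rows of 0/1 pixels."""
    # parse the three ASCII header tokens: magic, width, height
    tokens, i = [], 0
    while len(tokens) < 3:
        while data[i:i+1].isspace():
            i += 1
        start = i
        while i < len(data) and not data[i:i+1].isspace():
            i += 1
        tokens.append(data[start:i])
    i += 1  # exactly one whitespace byte separates header from raster
    assert tokens[0] == b'P4'
    width, height = int(tokens[1]), int(tokens[2])
    row_bytes = (width + 7) // 8       # each row padded to a byte boundary
    rows = []
    for y in range(height):
        row = data[i + y * row_bytes : i + (y + 1) * row_bytes]
        rows.append([(row[x // 8] >> (7 - x % 8)) & 1 for x in range(width)])
    return rows
```

Getting the row padding right is the usual stumbling block: a 10-pixel-wide image still consumes two bytes per row.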
I spent the weekend trying to implement SIFT based on Lowe's paper "Distinctive Image Features from Scale-Invariant Keypoints." Needless to say it was a fun weekend. The main focus was on the first two steps of the algorithm: "Detection of scale-space extrema" and "Accurate keypoint localization."
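The heart of the first step is testing each pixel of a difference-of-Gaussian image against its 26 neighbors in the same scale and the two adjacent scales. A sketch of that check on plain nested lists (the real thing runs over whole DoG octaves, not single pixels):

```python
def is_extremum(dog, s, y, x):
    """Return True when dog[s][y][x] is a scale-space extremum: strictly
    larger than all 26 neighbors across the scales above and below, or
    strictly smaller than all of them.

    dog: list of 2D lists (difference-of-Gaussian images at adjacent
    scales); s must be an interior scale and (y, x) off the border."""
    center = dog[s][y][x]
    neighbors = [dog[s + ds][y + dy][x + dx]
                 for ds in (-1, 0, 1)
                 for dy in (-1, 0, 1)
                 for dx in (-1, 0, 1)
                 if (ds, dy, dx) != (0, 0, 0)]
    return (all(center > n for n in neighbors) or
            all(center < n for n in neighbors))
```

The second step, accurate keypoint localization, then fits a quadratic around each surviving candidate and discards low-contrast and edge responses.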
One of my worries is that SIFT may not work as well on Wolfenstein 3D. The resolution is only 640 x 480 at best and there aren't many colors used. The floor and ceiling are solid shades of gray - the reason for this is described in the awesomely great book "Masters of Doom" by David Kushner. To test the viability of SIFT, I ran the OpenCV implementation against some sample screenshots from the game.
My favorite part about my Machine and Computer Vision course is that we get to implement some algorithms from the ground up instead of relying on OpenCV. I've also taken it upon myself to learn and implement Sobel, Canny, and the Hough transform. My implementations need more refining, although I finally worked out the kinks in my Sobel code. I think that's the best way to learn. It takes a long time, but it's worth it. For this project, I would like to implement SIFT from the ground up instead of using the OpenCV version. This post goes through a few smaller algorithms that I needed to implement to complete the first step of the SIFT algorithm.
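One of those smaller algorithms is Gaussian blur, which the scale-space construction in SIFT's first step leans on heavily. A sketch of building a sampled, normalized 1D kernel; the blur is separable, so the same kernel runs over the rows and then the columns:

```python
import math

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled, normalized 1D Gaussian with a 3-sigma cutoff by default."""
    if radius is None:
        radius = max(1, int(math.ceil(3 * sigma)))
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]    # normalize so weights sum to 1

def convolve_1d(signal, kernel):
    """Valid-mode 1D convolution (no border padding), enough for a demo."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

Because the kernel is normalized, blurring a constant signal leaves it unchanged, which makes a handy sanity check for an implementation.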
Betelbot is the name of my failed autonomous robot project from last year. It's one of those projects that I want to restart and learn from my mistakes, specifically on the hardware/electronics side. edX has a promising embedded systems course that I plan to work through in the summer. The hope is that the course will provide a stable foundation. Electronics is also an expensive hobby, and I can't really afford to buy parts to build Betelbot 2.0 yet.
For this project to work, the program needs to be able to simulate keyboard events to control the character. In addition, it would be nice to be able to grab frames from the game window as if the game were being recorded. The other requirement was that this all needed to run on Linux, since Windows 7 runs too slowly in a VM. Lubuntu in a VM, on the other hand, is lightweight and runs well. After some research, X11 turned out to be what I needed.
One of the purposes of this project is to apply what I've learned to a project that has no clear solution or outcome devised by a teacher. Sometimes I hear people complain that they never use anything they learned as a Computer Science student. I don't think that's true; it's just a matter of creating opportunities where this knowledge becomes relevant and useful. While working on this project, I've had to go through a lot of resources to help solve various problems, so I thought I'd list them here.
I've been enjoying my Computer and Machine Vision course this semester. I like it because it's challenging and makes me want to get better at math. Specifically, I need to learn Linear Algebra and get started on Calculus III material. After that, maybe bulk up on more statistics. A new appreciation for math has been the main benefit of working on a computer science degree. Math is amazing, and I think that hints at how to make math relevant to kids. As usual I'm getting off topic. The main point of this post is to go through some rough requirements for a Machine Vision project that I'm working on now. Specifically, I want to develop an AI to play Wolfenstein 3D via a Kinect and BeagleBoard-xM.
A common problem in digital image processing is detecting straight lines. The brute-force solution is to test the line through every pair of points, which is computationally intensive.
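The Hough transform sidesteps this by voting in parameter space: each point votes for every (theta, rho) line that could pass through it, and collinear points pile their votes onto the same cell, so a peak in the accumulator is a detected line. A sketch using a dict as the accumulator:

```python
import math

def hough_lines(points, n_theta=180):
    """Accumulate votes over (theta, rho) using rho = x*cos(t) + y*sin(t).
    One pass over the points replaces the exhaustive point-pair search."""
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            key = (t, rho)
            acc[key] = acc.get(key, 0) + 1
    return acc

def strongest_line(acc):
    """Return the (theta-index, rho) cell with the most votes."""
    return max(acc, key=acc.get)
```

A real implementation would use a 2D array and non-maximum suppression to pull out several peaks, but the voting idea is the whole trick.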
I haven't found time to work on the actual RPG lately, but I did have some time to experiment with Huffman coding as a way to shorten the number of characters used to represent the image data. It's doubtful that this improvement will make the Khan Academy CS platform more usable, since the uncompressed data will still be large. It's something that needs to be tested at some point.
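The experiment boils down to the textbook heap-based construction: repeatedly merge the two least frequent symbols so that common symbols end up with short bit strings. A sketch (not the code from the project):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix code mapping each symbol to a bit string."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): '0'}
    # heap entries: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(n, i, {sym: ''}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, counter, merged))
        counter += 1
    return heap[0][2]

def encode(text, codes):
    return ''.join(codes[ch] for ch in text)
```

For sprite data full of long runs of the same palette character, the savings can be real, though as noted above the decoded data is just as large in memory.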
The Sobel operator is an edge detection algorithm. Edges are identified by calculating the difference between a pixel and its surrounding pixels: pixels with large differences are likely to be edges. This is why the edges come out white and non-edges black - in 8-bit grayscale, white is 255 and black is 0.
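Concretely, the "difference" is a pair of 3x3 convolutions, one sensitive to horizontal change and one to vertical, combined into a gradient magnitude. A sketch on a plain list-of-lists grayscale image:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude via the Sobel kernels; the one-pixel border is
    skipped, and outputs are clamped to 0-255 to stay valid 8-bit gray."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = min(255, int((gx * gx + gy * gy) ** 0.5))
    return out
```

Flat regions produce zero (black) and sharp intensity jumps saturate to 255 (white), which matches the black/white output described above.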
The eight-puzzle is one of my favorite programming assignments. I've seen it multiple times in introductory AI courses, but it could also be an effective final assignment for a data structures course. A good implementation covers most of the important topics. For instance, students will quickly learn that an implementation using a linked list will be painfully slow, especially when coupled with breadth-first search. Change it to a 15-puzzle and it becomes even more noticeable. This should hopefully lead students to search for better data structures, such as replacing their linked list with a priority queue that uses a heap.
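For comparison, here is a sketch of the heap-backed version students should end up at: A* with a Manhattan-distance heuristic, where the frontier is a heapq-based priority queue. This is the standard approach, not a model solution for any particular course:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank

def manhattan(state):
    """Sum of each tile's grid distance from its goal cell (blank ignored)."""
    return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
               for i, t in enumerate(state) if t)

def neighbors(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """A* search; returns the optimal number of moves (None if exhausted).
    Replacing the heap with a scan over a plain list gives the painfully
    slow version mentioned above."""
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None
```

Swapping GOAL and the board size for a 4x4 grid turns this into the 15-puzzle, where the cost of a slow frontier becomes impossible to ignore.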
Since the semester started, I haven't worked on my Math RPG project. Most of my time has been spent memorizing terms in my Anatomy and Physiology course and working on some smaller programming projects. One of those projects was an attempt to port my RPG to the Khan Academy Computer Science platform. There are some cool programs there, especially the 3D-Minecraft program created by John Resig. My first experiment was to see if I could use my sprites on their platform since external images are not allowed for obvious reasons.
When I first learned about IPython Notebook last year, I was excited. What a great way to add interactivity to written content. I think I first learned about it when someone released an open source data mining book on GitHub that used IPython Notebook to run the examples. Again, exciting stuff.
This is a repost of one of my recent Stack Overflow answers. Basically, the question was how to send image data over WebRTC data channels. It caught my eye because last weekend I was experimenting with WebRTC camera basics. I plan to do a write-up about that experiment in another post. In the meantime, you can check out the demo here. As for the Stack Overflow answer, I spent about three hours researching the issue and learned quite a bit about WebRTC.
Development has slowed down quite a bit in the last few weeks. Part of that is that school started last week, but mainly I hit a wall and have been indecisive about how to proceed. As usual, my inner perfectionist wants to clean up the game engine and start adding unit tests and some automation. This week I decided that I would work more on structure, even if that means no new features in the game. The biggest thing is to just keep hacking away at the project, even if I'm doing things that aren't necessary yet.
I'm taking Anatomy and Physiology 1 (A&P1 from here on) this semester because it meets one of my science requirements. It just so happens that I received transfer credits for Biology 1, which I took at a previous university. This means that I just need to take A&P1 with the corresponding lab; otherwise I would need to take two Astronomy courses. In terms of time and cost, taking A&P1 is the logical choice, even though it is a lot of work, particularly in the area of memorizing hundreds and hundreds of terms.
I went against my plan of focusing on one project and started working on an iOS app tentatively called Quotidian, which basically means "something that happens every day." The main reason for working on this project is that two prospective employers asked if I knew mobile development, and I had to tell them that I didn't have much experience.
I've been trying to teach myself Machine Learning for the past year and a half. It has not gone well, due to my tendency to procrastinate and take on too much. So far I've worked through the first four programming assignments and watched the videos up to week 5 of the Coursera Machine Learning course, watched the first two lectures and attempted the first assignment of the edX course, worked through six weeks of the Coursera Recommender Systems course, watched the first two weeks of the Coursera Natural Language Processing course, tried to learn Hadoop from a book, and attempted Kaggle's beginners' challenge where the goal is to predict who survived the Titanic. I've reached the point where I need to develop a strategy and stick with it.
I spent the last two weeks working on the combat system. Specifically, I wanted to integrate basic math functionality into the system, and my main goal was to implement something usable for the first iteration. The implementation was actually straightforward compared to the design. I'm not a designer or an expert in UX, so the process amounts to trial and error. I not only want the game to look good, but I also want the interface to make sense.
I started thinking about storylines and game play mechanics the last few days. That's a good sign. It means that I'm starting to believe I can follow through with this project. The self-doubt has faded for now and with school out there's just more time to work on this every day. The big test will be when school starts again. Will I make the time necessary?
I made good progress on the combat system, but I still have a ways to go before the game can be considered fun. As usual, I would've liked to have done more. To help keep me on track, I started using the GitHub issue tracker - normally I use Trello, but I thought it would be best to keep my efforts public. My first milestone was to finish parts of the combat system this week. Unfortunately I didn't complete it; I went off track and spent a few days on sprite work instead.
Just finished finals. I should be happy about this, but two days in, I don't know what to do with myself. Mostly I feel tired and sleepy - probably because of all the late nights and Americanos I chugged down this week. The point of this aside, though, is that I didn't work on my RPG for two weeks, and I'm not happy about that. It's frustrating.
Thanks to the "Don't Break the Chain" method, I made decent progress on the game engine this week. Still not where I want to be, but I'm happy that I put in at least an hour each day this week. It definitely adds up and I'm finding it easier to get started. Ideally I'd work on this project out of sheer compulsion. Unfortunately that's just not how I work. Apparently the only option is to trick myself with a calendar full of red X's.
In my AI course this semester, one of the projects was to develop a competent AI for the game Owari. The class had a month to work on their bots, and on the due date we held a round-robin tournament. The top two teams received gift cards for pizza! So of course I was motivated - I can't resist free food. And as it turned out, neither could some of my classmates. There were a lot of creative implementations.
My current project is an RPG with a math-based combat system. I plan to post weekly updates about my progress. My hope is that this will motivate me to complete one of my project ideas for once.