
Developing BBS Themed Game – Cloudburst Connection

After a year of development, I’m incredibly excited to share the progress of my upcoming game with everyone!

Cloudburst Connection explores the life of a System Operator (Sysop) running a dial-up Bulletin Board System (BBS). Set your time circuits to the 80s, pop in your favorite new wave cassette, and get your modem running! Set in the neon-illuminated city of Cloudburst, you’ll be tasked with creating your dream BBS, building relationships with your users, and exploring other bulletin boards across the telephone network. Will you choose to grow your board into a major commercial success, or provide a free place for the community to enjoy the latest ANSI artwork? Or perhaps enter the underground world of hacking and phone phreaking?

Cloudburst Connection is all about making connections – connections to computers, connections with people, and connections in conversations. Creating your BBS is only part of the fun – interactions with your users shape the kind of Sysop you become. Build friendships and rivalries, make difficult decisions, and explore storylines through email, forum posts, and offline events. Experience online life before the Internet with Cloudburst Connection!

In addition to the single-player game, Cloudburst Connection features a multiplayer mode where you become a real Sysop! Not only can other players connect to your BBS, but you can program and share your own Python text-based games and applications. Now the only thing limiting your perfect BBS is your own imagination!

On a personal note, this project is very special to me and something I’ve wanted to do for a long time. A lot of my teenage life was spent calling and running bulletin boards. I made a lot of friends, learned a lot about programming, and had a lot of fun in a world that has now virtually disappeared in the wake of the Internet. Most folks out there have never heard of a BBS, and I wanted to create an accessible and fun way to share that world with them. For those lucky enough to have spent time on a BBS, I hope this game will bring back some good memories. More information will be coming soon (along with percent complete for each task), but here are some screenshots and an overview of where we currently are in development!

Single-player mode goals:

  • Create your perfect BBS, install games for your users, host files to download, provide services such as email and message forums
  • Visit other BBSes, discover secret networks, advertise your BBS, even hack your adversaries if you so choose
  • Grow the functionality of your BBS by crafting new features and services. Discover new recipes from text files and other online resources
  • Upgrade phone lines, modems, and hardware to allow more users to call your BBS
  • Build relationships with your users by responding to emails and message forum posts, as well as attending events with them offline
  • Fill your scrapbook of memories with photos and stories from meetups with your users
  • Calling other boards depletes focus; regain focus by sleeping
  • Purchasing files and other resources costs money; earn money by working a day job, selling files, or charging for BBS access
  • Explore different storylines to their conclusion with different game endings

Multi-player mode goals:

  • Allow real players to access your BBS through Steam
  • Create, host, and share BBS games written in Python. Features a documented API for interacting with the virtual computer from your script (see the illustrative sketch below).
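
To give a feel for what a player-created game might look like, here’s a purely illustrative Python sketch. The `api` object and its methods (`print_ansi`, `read_line`) are hypothetical placeholders for whatever the documented interface actually exposes – this is not the game’s real API.

```python
# Hypothetical example of a player-authored BBS door game.
# The "api" object and its methods are illustrative placeholders only.
import random

def guess_the_number(api):
    """A tiny door game: the caller gets three tries to guess a number."""
    api.print_ansi("Guess the number (1-10), you have 3 tries!\r\n")
    secret = random.randint(1, 10)
    for attempt in range(3):
        try:
            guess = int(api.read_line())       # read a line from the caller's terminal
        except ValueError:
            api.print_ansi("Numbers only, please.\r\n")
            continue
        if guess == secret:
            api.print_ansi("You got it! Thanks for calling.\r\n")
            return
        api.print_ansi("Nope, try again.\r\n")
    api.print_ansi(f"Out of tries -- it was {secret}.\r\n")
```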

(Mostly) Finished:

  • Voxel style 3D environment
  • Virtual computer and hardware, including text mode VGA (ASCII/ANSI graphics), PC speaker, modem, etc
  • Full multithreaded operating system, virtual file system, and extensive API
  • Virtual public telephone network
  • BBS configuration for setting menu options and marking which files (resources) are available for NPCs to purchase & download
  • ANSI paint program for drawing text menus, screens, and artwork
  • Terminal program for dialing other BBSes (NPC and player)
  • BBS modules: File downloads, email
  • BBS games: Blackjack (single player), FlashFight (multiplayer)
  • Multiplayer: Python interoperability for running player-created scripts on virtual computer

Todo:

  • Crafting system – players can craft new BBS features in the single-player mode by combining downloaded files using the “Programming Editor”
  • BBS module: File Manager – players can view/manage files (resources) they download here
  • BBS module: BBS statistics – players can review how well their BBS is doing here
  • BBS module: Event calendar – players can schedule and review upcoming meetings with NPCs here
  • BBS module: User directory – players can view the list of their NPC users here
  • Photo and scrapbook – photo attached to screen will update depending on NPCs the player is interacting with. The scrapbook will show pictures and text of events associated with NPCs
  • AI – NPCs will call the player’s and each other’s BBSes, write messages, play games, download files, etc
  • Dialog trees and events – the game will have an extensive amount of possible choices for conversations with NPCs
  • Assets – decorate your in-game room with lots of different furniture, art, knickknacks and collectibles

More info soon!

Updates and Tutorial Videos

To say I’ve been on a long hiatus since my last post would be an understatement. During that time, I snagged a Master’s and PhD in Computer Science, moved into a couple of new positions at UNH, and worked on a number of projects. Things have been busy to say the least, but now that a semi-normal schedule is finally starting to return to my life, I really want to focus on things that are fulfilling and meaningful. One of these things is dedicating time to personal projects again and sharing them with the community. I love seeing what other folks are working on, and I want to contribute and give back as well. I have a new game I’ve been working on for the last six months that I hope people will really enjoy playing, one that will also let them create their own virtual experiences in a very RETRO style. More details on that very soon!

In the meantime, I wanted to share two tutorial videos that might be able to help folks in the bioinformatics community. The first is a video on how to use PALADIN, a tool for characterizing metagenome shotgun data (see paper here). The second is a video on using the Linux program tmux, a tool for multiplexing multiple terminal sessions. Very useful for those who spend all day in a terminal window like myself!

Bioinformatics and PALADIN

I think one of the things I love most about attending grad school is all the opportunity for collaboration – especially across disciplines. To be surrounded by so many people with passion for these different topics, all working toward discovery and creation – it’s really an amazing experience. One particular interdisciplinary area I’ve fallen in love with is bioinformatics. I’ve had an interest in computational biology for a long time, as can be seen with my work on SynthNet – but this has been my first opportunity to work firsthand with others in these areas, as well as with experts squarely in the biology fields, which has been an extremely helpful learning experience.

Enter Bioinformatics
I’ve found bioinformatics, genomics, proteomics, etc. to be especially interesting, as there are such a ridiculous number of inherent parallels between what occurs in nature and what we’ve discovered and devised in Computer Science. Obviously the underlying Turing-complete, algorithmic nature of things drives them both, but it’s still awe-inspiring to see these processes in genetics happening naturally, and then to be able to make predictions using the same rules that one would in CS.

PALADIN: software for rapid functional characterization of metagenomes

One such area in bioinformatics that we’ve been focused on for the last 6 months or so is the problem of identifying genes/proteins from metagenomic read sets. In a metagenomic sample, you have many organisms present, perhaps thousands – all these small pieces of DNA mixed together – and it presents a problem when you want to actually identify what was in there. Or more aptly, in our case, the function of what was in there. I love making the analogy to taking 500 different jigsaw puzzle boxes, opening them up, and dumping them all together. To make things worse, though the puzzles are different, some of them feature a lot of the same themes – flowers, grass, sky, etc. But let’s step it up – you also lose some pieces in the process, some get damaged and misshapen, and there are duplicates of others. Now try reconstructing all 500 puzzles – not so easy!

While there are lots of strategies for computers to “reconstruct these puzzle pieces”, so to speak, many of them are slow or have other inherent issues. We attempt to solve these problems with our new software, PALADIN, which I’ve been lucky enough to be the lead developer of – though it’s a 100% team effort for this kind of project.

I won’t go into the full details of the software here, but if you’d like to learn more about it, our team, and details on the upcoming manuscript, you can read more about it on Professor Matt MacManes’ blog post.

And if you’re looking to try it out, visit our Github repository.

Overlapping Adaptive Mesh Refinement (AMR) in ParaView

First year of grad school done, second one about to start! It’s been an absolutely amazing experience so far – more to come about the bioinformatics aspect of it in future posts.

For this post, I wanted to discuss one of the plugins I developed during my scientific visualization class, and my newfound (at that time) love for the VTK framework and ParaView.

Visualize Everything!

If you haven’t encountered it before, VTK (Visualization Toolkit) is a framework developed by Kitware for handling the entire pipeline process – consuming organized data, then processing, filtering, visualizing, and/or exporting it. They’ve also developed an accompanying GUI application for easily manipulating VTK, called ParaView. Both of these packages have been around for many years, but I had (unfortunately) not been exposed to them until my class – however, after using the software for a short time, I quickly realized how ridiculously powerful the framework is, and I really wanted to do more work with it.
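
If you’ve never seen a VTK pipeline, here’s a minimal sketch of the idea from Python – just stock VTK with a toy sphere source, nothing to do with the Granite plugin yet. A source feeds a filter, a mapper turns the result into renderable geometry, and a render window displays it:

```python
# Minimal VTK pipeline sketch: source -> filter -> mapper -> actor -> render window.
import vtk

# Source: a simple sphere stands in for "organized data" here
sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(32)
sphere.SetPhiResolution(32)

# Filter: compute surface normals for nicer shading
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(sphere.GetOutputPort())

# Mapper and actor turn the processed data into something renderable
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(normals.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

# Renderer, window, and interactor complete the pipeline
renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```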

The UNH Granite Scientific Database System

As an open source pipeline, one of the places VTK shines is in its modularity and extensibility – almost every part of the pipeline can call a custom plugin. I decided to take advantage of this – UNH has a custom Java library named The Granite Scientific Database System (Granite SDB), which provides a comprehensive set of classes for accessing multidimensional scientific data. It was originally developed by, and continues to be maintained by, UNH Professor Daniel Bergeron, and has been expanded over the years by a number of students. While it is very powerful in its capabilities, it is designed strictly as a processing and storage library – it leaves the actual visualization routines to the developer. With this in mind, I thought it would be a perfect match to write a Granite plugin for ParaView.

Single Resolution Data

Before diving headfirst into the full capabilities of VTK, I decided to start with a simple read plugin for working with single resolution data. Since there are such a large number of medical images/datasets available on the net (CT and MRI scans), especially in DICOM format, I started by loading this data into Granite and then seeing if I could visualize it with ParaView via the plugin. After only a few days, I was able to get perfect results! Here are some examples of sets loaded through the plugin:

Mummy (CT)

Head (MRI)

Beetle (Micro CT)

Cool stuff! With that working, I wanted to tackle more…

Multi and Adaptive Resolution

One of the challenges when visualizing data, especially large amounts of data, is the limitations of the underlying hardware. By necessity, different methods must be employed to only visualize the relevant portions, whether it be limiting the amount rendered, the areas rendered, streaming portions at a time, etc. Along these lines, VTK offers a newer portion of the pipeline that allows for streaming blocks of overlapping data at different resolutions. In this way, only specific areas (dependent on the viewport focus – direction and zoom) will be rendered, and done so in a streaming manner, so the user can continue to manipulate the program while searching for areas of interest in the render.
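
As a toy illustration of the idea (this is not VTK’s actual AMR API, just a sketch of the concept), imagine 1D blocks that only get subdivided to higher levels when they’re near the current focus point, while distant blocks stay coarse:

```python
# Toy illustration of viewport-driven refinement (not VTK's actual AMR API):
# blocks near the focus point keep splitting into higher-resolution children,
# while distant blocks stay coarse -- roughly what the streaming AMR view does.
from dataclasses import dataclass

@dataclass
class Block:
    x0: float          # 1D extent keeps the example small
    x1: float
    level: int

def refine(blocks, focus, max_level, radius=0.25):
    refined = []
    for b in blocks:
        center = (b.x0 + b.x1) / 2
        if abs(center - focus) < radius and b.level < max_level:
            refined.append(Block(b.x0, center, b.level + 1))
            refined.append(Block(center, b.x1, b.level + 1))
        else:
            refined.append(b)
    return refined

blocks = [Block(i / 4, (i + 1) / 4, 0) for i in range(4)]   # coarse level-0 blocks
for _ in range(3):                                          # "stream in" finer levels
    blocks = refine(blocks, focus=0.6, max_level=3)
print([(b.x0, b.x1, b.level) for b in blocks])
```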

Long story short, the following video is the end result of the plugin supporting overlapping AMR. In this demo, the plugin resets the visualization every time the camera is rotated to demonstrate clearly how it operates. As can be seen, the data starts out at a very low resolution for quick rendering, then continues to resolve to higher and higher resolutions, centered around the area being viewed, as data is streamed from Granite, through the plugin, and into VTK. I used the same mummy CT scan as shown above.

For the plugin and documentation, check out the GitHub repository.

For the extended documentation, including theory, results, structure diagrams, citations, etc, see the plugin final report.

Personal Announcement

As can be seen by the post dates, I’ve experienced another one of my blogging hiatuses.  This was due mostly to going into crunch mode trying to finish up emissary RT – I really needed to get it wrapped up before September, as I needed to have my schedule wide open by the start of the month.  The reason why – I got accepted to grad school!  I’ll be starting the program to get my Master’s in Computer Science in a few days, and needed to get this final item checked off my list.  I’m very excited for school, but equally happy to be finished with emissary RT – it was a fun project that I’ve long had the idea for, but after 2 years of development, I was ready for it to be complete.

In other news, I’ve really been diving back into my gaming roots lately.  I recently finished listening to Masters of Doom on audiobook (a biography of John Carmack and John Romero – get it NOW if you haven’t read it already!), and along with bringing back a HUGE slew of memories from gaming in the 90s (shareware like Commander Keen, BBSes, the start of the Internet, etc), it was also incredibly inspiring to hear the story of some passionate developers following their dreams and love of development.  Along with this, I also finally started playing with Unity, which I’ve been meaning to try for a while.  Long story short, I am completely hooked on the game engine, and incredibly ramped up to start a new game (it’s been too long since my last one), so along with working more on SynthNet, this will be my next big project.  More details soon!

New Website and Product

After many, many years (I’ve lost count at this point) of faithful service, I’ve finally refreshed the Synthetic Dreams website into something a little more modern and functional.  Take a look if you’ve got a moment – it’s built on Drupal (of course), and features a responsive design for those browsing on the go.

Additionally, after being in development for almost 2 years, I’ve finally finished emissary RT – an ODBC driver that allows you to access a whole slew of things, from your file system to DHCP and DNS.  The upshot of this is allowing you to use SQL (or the GUI in ODBC apps) to manipulate files and services in very powerful and automation-friendly ways.  You can check out the full details on the Synthetic Dreams site as well.
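
As a rough idea of the kind of thing this enables, any ODBC-capable tool can point at the driver and issue queries – for example from Python via pyodbc. The DSN, table, and column names below are made up purely for illustration, not emissary RT’s actual schema (see the Synthetic Dreams site for the real details):

```python
# Illustrative only -- the DSN, table, and column names here are invented
# placeholders, not emissary RT's actual schema.
import pyodbc

conn = pyodbc.connect("DSN=EmissaryRT")          # connect through the ODBC driver
cursor = conn.cursor()

# e.g. find large log files somewhere on the file system
cursor.execute(
    "SELECT name, size FROM files WHERE path LIKE ? AND size > ?",
    ("C:\\Logs\\%", 10 * 1024 * 1024),
)
for name, size in cursor.fetchall():
    print(name, size)
```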


Article Featured on Qualcomm Spark Website

I realized while responding to some comments that I completely forgot to mention some exciting news!  Last month, I was fortunate enough to have an article featured on the Qualcomm Spark website, “Can We Grow Artificial Intelligence?”  It explores some of the capabilities we currently have for emulating DNA and biological growth, and incorporating these abilities into our normal programming tools to develop all sorts of AI.  I had a lot of fun writing it, as well as reading the other articles featured on the site.  So many exciting technologies on the horizon (or already here!).


Evolution Experimentation Module Complete

With the Genetic Mutation Engine completed, I wanted to put it to actual use.  While it’s fun to put complex SynthNet networks through the mutation process and watch the really cool looking results, manually doing it doesn’t really serve much of a purpose.  However, now that the Evolution Experimentation Module is complete, the real power of the mutation engine is unlocked.

Artificial Selection in Action

The Evolution module allows us to take an initial, manually created SynthNet network (as simple or complex as desired), test how effective it is in a task, and then either allow it to reproduce and continue on its genetic line, or prevent reproduction in the case of decreased task effectiveness.  It performs this across multiple “breeds”, or equally effective genomes, until a novel mutation shows improved performance, which is considered a new “species”.  This, in effect, emulates multiple genetic lines competing at a user-defined task, with artificial selection based on that task dictating the path of evolution of the SynthNet network.

Specifics of the module are as follows:

  1. Automatically and repeatedly mutates SynthNet DNA, grows its corresponding network, tests it, and records/compares the results
  2. Stores and manages all “breeds”, or equally effective genomes, across all mutations.
  3. Selects for new “species”, or more effective genomes, and blocks reproduction of less effective species.
  4. Detects cancerous (continuous) or unstable (requiring too much processor/memory to be feasibly used) networks and does not select for them.
  5. Can be used with any user-defined (programmed) task with a result that can be quantitatively graded, allowing full flexibility to direct artificial selection.
  6. Along with effectiveness, also stores structure (segment) count, neuron count, synapse count, and a graphical snapshot of each mutation.
  7. Stores all data into a MySQL database, to allow for easy continuation of experimentation after interruption, as well as viewing results on the web (coming soon!).
  8. Programmed in Python for easy use/alteration/integration.
  9. All interaction between the Evolution module and SynthNet is done via the Peripheral Nervous System Protocol, allowing for remote use (SynthNet can be run on a remote server with increased resources while the client runs at home).
  10. Also provides menu to send manual commands to a SynthNet network via PNSP for easy manual manipulation, testing, and troubleshooting.
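
Putting the behavior described above into rough Python, the core selection loop looks something like this. This is an illustrative sketch only, not the module’s actual code – in the real module, the mutate/grow/grade steps talk to SynthNet over PNSP and everything is persisted to MySQL:

```python
# Rough sketch of the artificial selection loop (not the module's actual code).
# mutate/grow/grade stand in for the real PNSP-driven operations.
import random

def evolve(seed_genome, mutate, grow, grade, generations=1000):
    breeds = [seed_genome]                    # equally effective genomes ("breeds")
    best_score = grade(grow(seed_genome))
    for _ in range(generations):
        parent = random.choice(breeds)        # continue one of the current lines
        child = mutate(parent)
        network = grow(child)
        if network is None:                   # cancerous or unstable growth: reject
            continue
        score = grade(network)                # user-defined, quantitative task
        if score > best_score:                # new "species" found
            breeds, best_score = [child], score
        elif score == best_score:             # another breed of equal effectiveness
            breeds.append(child)
        # less effective children simply never reproduce
    return breeds, best_score
```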

I’m currently trying it out by artificially selecting for a neural network that can detect parity (even/odd) in numbers.  We’ll see how it does – once I have some results, I’ll be creating the front-end user interface to browse through mutations/results/pictures on the web.  Hopefully more on that soon!



Genetic Mutation Engine Functional

More SynthNet goodness today!  First off, I finished up the changes to the code that ensure all parts of SynthNet stay relatively in sync with each other.  With one of my next big tasks being rate and temporal coding, timing within the system needs to be correct to support exact oscillating frequencies of action potentials, as well as resonance.  There were some significant changes to the code, so I needed to retest most parts of the entire program again.  That took up pretty much the month of August and the beginning of September – all works well though!

Mutative Madness!

Before I took the next step and jumped into the neural coding work, I wanted to program the functionality that allows for the mutation of SynthNet’s virtual DNA, accommodating evolution experiments.  I finished up the mutation engine itself a couple of nights ago, and am starting on the interface portion that will allow external programs to perform artificial selection experiments by monitoring the effectiveness of a DNA segment – either continuing its mutation if successful, or discarding the genetic line and returning to a previous one if less successful.

Below can be seen examples of the effects of mutation performed on a virtual DNA segment.  The first picture shows a network grown with the original, manually created DNA (the segment used in my classical conditioning experiment).

The next set of pictures shows the results of neural networks grown using DNA that has undergone 0.5%–2% mutation. Most were beautiful to look at, but the final two pictures were also completely functional, supporting the proper propagation of action potentials and integration of synaptic transmission – only with an entirely novel configuration!


Really amazing to look at (I think!)  Currently, SynthNet DNA can be exposed to the following types of errors in its genomic sequences:

  1. Deletion – Segments are removed entirely
  2. Duplication – Segments are copied in a contiguous block
  3. Inversion – Segments are written in reverse
  4. Insertion – Segments are moved and inserted into a remote section
  5. Translocation – Akin to Insertion, but two segments are swapped with each other
  6. Point Mutations – Specific virtual nucleotides are changed from one type into another

Currently, these operations result in in-frame mutations.  It was actually easier to allow frameshifts to occur – however, SynthNet DNA is more sensitive to framing: whereas a biological reading frame spans a fixed codon (3 nucleotides), SynthNet DNA instructions vary from 1–6 virtual nucleotides in length.  When I allowed frameshifting, the results were full of nonsense mutations, which prevented almost any meaningful growth of neural structures.
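
As a toy illustration of a few of the operators above (definitely not SynthNet’s actual mutation engine), here’s what they look like when the genome is treated as a list of whole virtual instructions – which is exactly what keeps everything in frame:

```python
# Toy versions of a few of the mutation operators, with the genome treated as a
# list of whole virtual instructions so every mutation stays in frame.
# Illustrative only -- not SynthNet's actual code.
import random

def random_span(genome):
    """Pick a random contiguous span [i, j) within the genome."""
    i, j = sorted(random.sample(range(len(genome) + 1), 2))
    return i, j

def deletion(genome):
    i, j = random_span(genome)
    return genome[:i] + genome[j:]

def duplication(genome):
    i, j = random_span(genome)
    return genome[:j] + genome[i:j] + genome[j:]      # copied block stays contiguous

def inversion(genome):
    i, j = random_span(genome)
    return genome[:i] + genome[i:j][::-1] + genome[j:]

def point_mutation(genome, alphabet):
    """Swap one virtual nucleotide inside one instruction for another."""
    genome = list(genome)
    k = random.randrange(len(genome))
    instruction = list(genome[k])
    instruction[random.randrange(len(instruction))] = random.choice(alphabet)
    genome[k] = tuple(instruction)
    return genome

# Insertion and translocation follow the same pattern: cut a span out and
# splice it back in elsewhere, or swap two spans with each other.
```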

Very excited with how things are turning out.  If, as seen in the last two pictures, we can get such novel pathway growth with a simple random mutation, I can’t wait to start the artificial selection routines and watch the results unfold!


Converting Your Corporate Intranet to Drupal

Though I have fun working on SynthNet and other projects at night, during the day I fill the role of mild-mannered network administrator at the Manchester-Boston Regional Airport (actually, the day job is quite a bit of fun as well). One of the ongoing projects I’ve taken on is consolidating all of our various Intranet-oriented services onto a single platform for central management, easier use, and cost effectiveness. As mentioned in a previous article (linked to below, see NMS Integration), I knew Drupal was the right candidate for the job, simply due to the sheer number of modules available for a wide array of functionality, paired with constant patching and updates from the open source community.  We needed a versatile, sustainable solution that was completely customizable but wasn’t going to break the bank.

The Mission

The goal of our Drupal Intranet site was to provide the following functionality:

  1. PDF Document Management System
    1. Categorization, customized security, OCR
    2. Desktop integrated uploads
    3. Integration with asset management system
  2. Asset Management System
    1. Inventory database
    2. Barcode tracking
    3. Integration with our NMS (Zenoss)
    4. Integration with Document Management System (connect item with procurement documents such as invoices and purchase orders)
    5. Automated scanning/entry of values for computer-type assets (CPU/Memory/HD Size/MAC Address/etc)
    6. Physical network information (For network devices, switch and port device is connected to)
    7. For network switches, automated configuration backups
  3. Article Knowledgebase (categorization, customized security)
  4. Help Desk (ticketing, email integration, due dates, ownership, etc)
  5. Public Address System integration (Allow listening to PA System)
  6. Active Directory Integration (Users, groups, and security controlled from Windows AD)
  7. Other non-exciting generic databases (phone directories, etc)

Implementation

Amazingly enough, the core abilities of Drupal covered the vast majority of the required functionality out of the box.  By making use of custom content types with CCK fields, Taxonomy, Views, and Panels, the typical database functionality (entry, summary table listings, sorting, searching, filtering, etc) of the above items was reproduced easily.  However, specialized modules and custom coding were necessary for the following parts:

  1. Customized Security – Security was achieved for the most part via Taxonomy Access Control and Content Access.  TAC allowed us to control access to content based on user roles and categorization of said content (e.g. a user who was a member of the “executive staff” role would have access to documents with a specific taxonomy field set to “sensitive information”, whereas other users would not).  Additionally, Content Access allows you to further refine access down to the specific node level, so each document can have individual security assigned to it.
  2. OCR – This was one of the few areas where we chose to go with a commercial product.  While there are some open source solutions out there, some of the commercial engines are still considerably more accurate, including the one we chose, ABBYY.  They make a Linux version of the software that can be driven via the shell.  With a little custom coding, we have the ABBYY software running on each PDF upload, turning it into an indexed PDF.  A preview of the document is shown in Flash format by first creating a SWF version (using pdf2swf), then displaying it with FlexPaper/SWF Tools.
  3. Linking Documents – This was performed with node references and the Node Reference Explorer module, allowing user-friendly popup dialogs for choosing the content to link to.
  4. Desktop Integration – Instead of going through the full steps of creating a new node each time, choosing a file to upload, filling in fields, etc, we wanted the user to be able to right click a PDF file on their desktop, and select “Send To -> Document Archive” from Windows.  For this, we did end up doing a custom .NET application that established an HTTP connection to the Drupal site and POSTed the files to it.  Design of this application is an article in itself (maybe soon!).
  5. Barcoding – This was the last place we used a commercial product simply due to the close integration with our barcode printers (Zebra) – we wanted to stick with the ZebraDesigner product.  However, one of the options in the product is to accept the ID of the barcode from an outside source (text/xml/etc), so this was simply a matter of having Drupal put the appropriate ID of the current hardware item into a file and automating ZebraDesigner to open and print it.
  6. NMS (Zenoss) Integration – The article of how we accomplished this can be found here.
  7. Automated Switch Configuration Backups and Network Tracking – This just took a little custom coding and was not as difficult as it might seem.  Once all our network switches were entered into the asset management system and we had each IP address, during the Drupal cron hook, we had the module cURL the config via the web interface of the switch by feeding it a SHOW STARTUP-CONFIG command (e.g. http://IP/level/15/exec/-/show/startup-config/CR) – which was saved and attached to the node.  Additionally, we grabbed the MAC database off each switch (SHOW MAC-ADDRESS-TABLE), and parsed that, comparing the MAC addresses on each asset to each switch port, and recording the switch/port location into each asset.  We could now see where each device on the network was connected.  A more detailed description of the exact process used for this may also be a future article (a rough sketch of the approach appears after this list).
  8. Help Desk – While this could have been accomplished with a custom content type and views, we chose to make use of the Support Ticketing Module, as it had some added benefits (graphs, email integration, etc)
  9. Public Address System – Our PA system can generate ICECast streams of its audio.  We picked these up using the FFMp3 flash MP3 Live Stream Player.
  10. Automated Gathering of Hardware Info – For this, we made use of a free product called WinAudit loaded into the AD login scripts.  WinAudit will take a full accounting of pretty much everything on a computer (hardware, software, licenses, etc) and dump it to a CSV/XML file.  We have all our AD machines run an audit during login, then dump these files to a central location for Drupal to update the asset database from during the cron job.
  11. Active Directory Integration – The first step was to ensure the Apache server itself was a domain member, which we accomplished through the standard Samba/winbind configuration.  We then set up the PAM Authentication module, which allowed the Drupal login to make use of the PHP PAM package, which ultimately allows it to use standard Linux PAM authentication – and once that is integrated with AD, it includes all AD accounts/groups.  A little custom coding was also done to ensure matching Drupal roles were created for each AD group a user was a part of – allowing us to control access within Drupal (see #1 above) via AD groups.
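
To make item 7 a little more concrete, here’s a rough, standalone Python sketch of the idea.  The real implementation lives in PHP inside a custom Drupal module run from the cron hook, and the IP addresses, authentication handling, and MAC-table output format below are assumptions rather than the exact code we used:

```python
# Rough sketch of the switch backup / port-tracking idea from item 7.
# The production version runs as PHP in a custom Drupal module during cron;
# IPs, auth handling, and the MAC-table format here are assumptions.
import re
import urllib.request

def run_ios_command(ip, command_path):
    # Cisco IOS HTTP exec URL, e.g. http://IP/level/15/exec/-/show/startup-config/CR
    url = f"http://{ip}/level/15/exec/-/{command_path}/CR"
    with urllib.request.urlopen(url) as response:        # add authentication as needed
        return response.read().decode("utf-8", "replace")

def backup_config(ip):
    """Grab the startup config so it can be attached to the switch's asset node."""
    return run_ios_command(ip, "show/startup-config")

def mac_to_port(ip):
    """Parse 'show mac-address-table' output into {mac: port}."""
    output = run_ios_command(ip, "show/mac-address-table")
    mapping = {}
    for line in output.splitlines():
        match = re.search(
            r"([0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4}).*\s(\S+)\s*$", line, re.I)
        if match:
            mapping[match.group(1).lower()] = match.group(2)   # '0011.22aa.bbcc' -> 'Fa0/12'
    return mapping

# Comparing each asset's recorded MAC address against every switch's table
# tells you which switch and port the device is plugged into.
```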

There was a liberal dose of code within a custom module to glue some of the pieces together in a clean fashion, but overall the system works really smoothly, even with heavy use.  And the best part is, it consists of mainly free software, which is awesome considering how much we would have paid had we gone completely commercial for everything.

Please feel free to shoot me any specific questions about functionality if you have them – there were a number of details I didn’t want to bog the article down with, but I’d be happy to share my experiences.