I knew I was going to use this display, but I wasn't sure how to employ the touch screen. The main quandary was that the touch panel was twice the height of the display: the width was about right, but half of the touch panel would hang off the display. What if I dedicated that half of the touch panel to a full keyboard? The top half of the touch panel would interact with the on-screen objects while the bottom half serves as the keyboard. Here's how it looks:
The items on screen in this photo are part of the "test menu": a collection of tests used during development. I left them in as part of the device in case I needed to troubleshoot something later.
The touch keyboard class looks like this:
class TouchKeyboard {
public:
    typedef enum Prompt { leftBottom, rightTop, badData } Prompt;
    typedef enum MetaKeys { mkShift = 0x01, mkCtrl = 0x02, mkAlt = 0x04, mkCaps = 0x08 } MetaKeys;

    struct CalibrationPrompt {
        virtual void PromptUser(Prompt prompt) = 0;
    };

    struct MetakeyStateChange {
        virtual void Changed(MetaKeys metaKeys) = 0;
    };

public:
    static boolean GetKeyboardCalibration(TouchScreen &screen, Rect &bounds, CalibrationPrompt &prompt);

    TouchKeyboard(TouchScreen &screen);
    TouchKeyboard(TouchScreen &screen, MetakeyStateChange &stateChange);

    // Return true if the calibration data needs to be written
    bool Initialize(CalibrationData &calibration, CalibrationPrompt &prompt, bool forceCalibration);

    bool isMetaKey(uint8_t keyCode);
    bool getKeyPressed(uint8_t &row, uint8_t &key, bool wait);
    bool getKeyPressed(uint8_t &row, uint8_t &key, bool wait, Point &pressPt);
    bool getScanCode(uint8_t &scanCode, bool wait, Point &pressPt);
    bool getKeyCode(uint8_t &keyCode, MetaKeys &metaKeys, bool wait, Point &pressPt, bool returnMeta = false);
    void toggleMetaKeyState(MetaKeys metaKeys);
    char ToASCII(uint8_t keyCode, MetaKeys metaKeys, bool clearMetaState = true);

private:
    typedef struct MetaKeyBits {
        bool _shift:1, _ctrl:1, _alt:1, _caps:1;
    } MetaKeyBits;

    TouchScreen *_screen;
    MetakeyStateChange *_stateChange;
    Rect _bounds;
    union {
        MetaKeys _metaKeys;
        MetaKeyBits _metaKeyBits;
    };

    void clearMetaState(void);
    void metaStateChanged(void);
};
If you look at the whole source code, you can clearly see some level of influence from, ahem, Windows... The keyboard employs scan codes, which represent the "physical" (OK, virtual) keys; these are then translated to Virtual Key codes. Those key codes, coupled with the meta-key state (Shift, Control, Alt), are used to produce an actual ASCII character code.
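To make that flow concrete, here's a minimal sketch of a client-side polling loop; the function name, header name, and the Serial echo are mine, not from the actual project:

#include "TouchKeyboard.h" // assumed header name for the class above

// Illustrative polling loop: touch -> scan code -> key code -> ASCII.
// Assumes an already-initialized TouchKeyboard named kbd.
void pollKeyboard(TouchKeyboard &kbd) {
    uint8_t keyCode;
    TouchKeyboard::MetaKeys metaKeys;
    Point pressPt;

    // Non-blocking read: returns true only when a key was pressed.
    if (kbd.getKeyCode(keyCode, metaKeys, false, pressPt)) {
        // Translate the virtual key code plus meta state into a character.
        char ch = kbd.ToASCII(keyCode, metaKeys);
        if (ch != 0) {
            Serial.print(ch); // echo the typed character
        }
    }
}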
MetakeyStateChange is a simple "interface" implemented by the client code in order to be notified whenever the internal meta-key state changes. I used this callback to display the meta-key state on the screen.
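A client implementation might look something like the sketch below; mine drew to the display, but this one just prints to Serial to keep the example self-contained:

// Hypothetical client callback for meta-key state changes.
// Pass it to the second constructor: TouchKeyboard kbd(screen, metaStatus);
struct MetaStatusDisplay : TouchKeyboard::MetakeyStateChange {
    virtual void Changed(TouchKeyboard::MetaKeys metaKeys) {
        Serial.print(F("Shift:")); Serial.print((metaKeys & TouchKeyboard::mkShift) ? 1 : 0);
        Serial.print(F(" Ctrl:")); Serial.print((metaKeys & TouchKeyboard::mkCtrl) ? 1 : 0);
        Serial.print(F(" Alt:"));  Serial.print((metaKeys & TouchKeyboard::mkAlt) ? 1 : 0);
        Serial.print(F(" Caps:")); Serial.println((metaKeys & TouchKeyboard::mkCaps) ? 1 : 0);
    }
};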
If you notice, the constructors only take a reference to the TouchScreen class and no location of the keyboard. That is handled through the Initialize() method, into which the CalibrationData is passed; if the data is uninitialized, it will invoke a "keyboard calibration" process. That is the purpose of the CalibrationPrompt callback interface: it asks the user to touch the bottom-left and top-right corners of the keyboard area (the leftBottom and rightTop prompts). Those two touches form the bounds within which the keyboard responds to touch events and translates them to scan codes and characters. The idea is that the CalibrationData is persisted to the EEPROM on the Arduino so that on the next boot the keyboard is already calibrated.
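In sketch form, startup might look roughly like this; the EEPROM address, the Serial-based prompt, and the function names are my assumptions, not the project's actual code:

#include <EEPROM.h>

// Hypothetical prompt implementation that tells the user where to touch.
struct SerialPrompt : TouchKeyboard::CalibrationPrompt {
    virtual void PromptUser(TouchKeyboard::Prompt prompt) {
        switch (prompt) {
        case TouchKeyboard::leftBottom: Serial.println(F("Touch the bottom-left corner")); break;
        case TouchKeyboard::rightTop:   Serial.println(F("Touch the top-right corner"));   break;
        case TouchKeyboard::badData:    Serial.println(F("Bad touch data; try again"));    break;
        }
    }
};

// Assumed EEPROM layout: CalibrationData stored at address 0.
void setupKeyboard(TouchKeyboard &kbd) {
    CalibrationData calibration;
    EEPROM.get(0, calibration);

    SerialPrompt prompt;
    // Initialize() returns true when (re)calibrated data should be saved.
    if (kbd.Initialize(calibration, prompt, false)) {
        EEPROM.put(0, calibration);
    }
}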
The bounds of the keyboard are scaled internally to normalize them to a virtual size. Once a touch is determined to be within the bounds, the internal static meta-data is scanned (see the TouchKeyboard.cpp file) to determine the key pressed. getKeyPressed returns the keyboard row and column along with the actual point pressed, as returned from the TouchScreen class.
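Used from a sketch, that looks something like this (assuming Point exposes x and y members; the printout is just illustrative):

// Illustrative blocking read of a physical key position.
void readOneKey(TouchKeyboard &kbd) {
    uint8_t row, key;
    Point pressPt;

    // wait == true blocks until a touch lands inside the keyboard bounds.
    if (kbd.getKeyPressed(row, key, true, pressPt)) {
        Serial.print(F("row="));  Serial.print(row);
        Serial.print(F(" col=")); Serial.print(key);
        Serial.print(F(" at "));  Serial.print(pressPt.x);
        Serial.print(F(","));     Serial.println(pressPt.y);
    }
}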
Having a full keyboard was really nice because I had added an SD Card Shield to the Arduino so that I could save the data. With the keyboard, I could enter a filename into which the data is written using the SD card libraries. Coupled with the on-screen, touch-sensitive objects, it also eliminated the need for any other physical buttons or switches: all control and input is handled through the touch panel.
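Filename entry might look roughly like the sketch below; the use of Enter to finish, the lack of backspace handling, and the placeholder data write are all simplifications of mine:

#include <SD.h>

// Hypothetical filename entry and save. Assumes SD.begin() was
// already called with the shield's chip-select pin.
void saveData(TouchKeyboard &kbd) {
    char filename[13]; // 8.3 name plus terminator
    uint8_t len = 0;

    for (;;) {
        uint8_t keyCode;
        TouchKeyboard::MetaKeys metaKeys;
        Point pressPt;
        if (!kbd.getKeyCode(keyCode, metaKeys, true, pressPt))
            continue;

        char ch = kbd.ToASCII(keyCode, metaKeys);
        if (ch == '\r' || ch == '\n')
            break; // Enter finishes the filename
        if (ch != 0 && len < sizeof(filename) - 1)
            filename[len++] = ch; // append the typed character
    }
    filename[len] = '\0';

    File f = SD.open(filename, FILE_WRITE);
    if (f) {
        f.println(F("...captured data goes here..."));
        f.close();
    }
}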
As you can see, it turned out that most of the software I developed for this device was for things other than its direct purpose. When I present the Bluetooth-connected version, you'll see that this kind of sophisticated interactive software was unnecessary there... the UI was an Android application I wrote to communicate via Bluetooth.
Next up I'll look at some of the UI widgets and the EventLoop class that brings things together.