Colorblindness affects approximately 5-8% of the population, mainly males. The visualization of these kinds of colorblindness is eye-opening, to say the least.
The most common form of colorblindness is deuteranomaly, which is red-green. The colorblindness simulations show the color red as indistinguishable from a certain shade of green.
To my eyes, the dichromatic treatment renders both green and red as amber or ochre, which means that yellow, orange, green, and red are seen as a light-yellow-to-dark-yellow spectrum, with purple appearing as a darker form of blue.
This means that the use of red and green as distinguishably different should be avoided. The use of red-yellow distinctions or green-yellow distinctions could work.
The blue-purple distinction also gets muddled, which undermines the standard HTML convention of dark blue links and purple visited links.
There are three online marketing graphical components which should be made colorblind-safe:
* Company logo and other visual brand design elements
* Website color scheme, including background, headers, subheads, links, and visited links
* Any graphical banner advertisements used anywhere on the web
Just like the company or brand name and website URL, which may not be changeable, the company brand colors and logos may be off-limits to optimization. However, any kind of navigation or non-core imagery should be viewed through the lens of the colorblind customer.
Rules of Thumb
Some rules of thumb can be generated from these research findings:
* Do not mix green with either red or orange
* Use a darker green against light backgrounds, as light green appears as yellow
* Do not mix blue and purple, and avoid shades between blue and purple, as they won't stand out from each other
* Use the VischeckJ tool (below) which will show how differentiated a given logo and graphic scheme is from competitor brand visuals
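Since red-green hue differences collapse for dichromats, lightness contrast is the safer signal. Here is a minimal sketch using the WCAG 2.x relative-luminance formula (the formula is standard; using the contrast ratio as a colorblind-safety proxy, and the helper names, are my own illustration):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Perceptual lightness of an (r, g, b) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1: tuple, rgb2: tuple) -> float:
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    hi, lo = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Pure red vs. mid-green: the ratio falls well under the 4.5:1 WCAG text
# threshold, so the pair relies on hue alone -- exactly what to avoid.
red_green = contrast_ratio((255, 0, 0), (0, 128, 0))
```

If two colors that must be distinguishable score near 1:1, they are being distinguished by hue alone, which is the failure mode described above.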
As per the usual, the idea is to be different, but in a good way.
Syntax for the Amazon ASIN (for books, effectively a hyphen-less ISBN-10 derived from the ISBN-13)
Take the encoder output and use a Code128 font to display it
The Amazon Barcode
Note for revision: the latest version of encode128 is using different start/stop characters than what is displayed/discussed below. Need to correct this. The main point is to use encode128 encoder and the code128* fonts from the same developer, which work well together.
Anyone wishing to use Amazon Advantage (for media publishers) or Amazon Fulfillment needs to include barcodes on individual items, so that they are stored and selected as individual units. Amazon provides the following information in one of their PDFs:
Amazon Barcode Guidelines
If you would like to print barcodes directly on Units, use the UCC128 barcode. The UCC128 barcode standards are available on the Internet.
6.2. Amazon uses the UCC128 barcode (font) to encode the FNSKU or the ASIN in the barcode. We don't use any leading or trailing digits (application identifiers or checksum digits).
6.3. The full specification is UCC128 code set A (this is the code set that supports alphanumeric data).
6.4. If you are building the barcode from scratch, you can review the standards or purchase software (there are many barcode applications available for free or at reasonable prices).
To someone who doesn't know jack about barcodes, this is bewildering and unhelpful. Sure, there are many barcode applications available for free or at reasonable prices, but where are they, how do they work, and, more importantly, which ones do the Amazon thing of UCC128 (which is not the current name of any standard; it is an obsolete label for what is now GS1-128).
And so the adventure begins.
Code 128 A B C
First off, the 128 of Code 128 has to do with ASCII (which is 128 characters), some extended characters (all of Latin-1), and some clever compression (if the data is compressible). Also, the GS1-128 shipping standard (formerly known as UCC/EAN-128) is a subset of Code 128. This means that all shipping and most product identification labels rely on all or some part of how this code works.
Secondly, A, B, and C are slightly different schemes (code sets with different character support), and most systems support all of them; that is, there is code switching between the sets depending on what is being encoded. Code set A covers uppercase letters, digits, punctuation, and ASCII control characters; B covers uppercase, lowercase, digits, and punctuation; C encodes digit pairs 00-99. B is the most common scheme and can be used by itself, but C allows for compression when the data is numeric (two digits per symbol). If nothing fancy is needed, then B is fine.
In Code 128 bar code encoding, each character consists of three bars and three spaces (each with a possible width of 1-4 modules). A complete barcode comprises a start character, the data characters, a checksum character (based on a calculation) just before the stop character, and the stop character.
Note that each character in the bar code fits into 11 module widths (that is, the three bars and three spaces together add up to 11), with bars and spaces 1-4 modules wide. Each character starts with a bar and ends with a space, except the final stop character, which is 13 modules wide and ends with a 2-module bar.
Starting and Stopping Characters
For the Code 128 B character set, the start character is Ñ (a capital N with a tilde overhead). The stop character is Ó (a capital O with an acute accent). Sometimes these are rendered as different characters; what is most important is having a font whose control-character mapping matches the encoder being used.
The basic calculation is a modulo-103 remainder (103 being the number of non-delimiter characters) of a weighted sum: the start code value, plus each character's numeric value multiplied by its 1-based position. For the text string Hello, we would see:
Start B = 104
H = 40
e = 69
l = 76
l = 76
o = 79
checksum = ((104) + (40×1) + (69×2) + (76×3) + (76×4) + (79×5)) modulo 103
(Note the 104 for Start B; if the barcode were only a number, it would use Start C, value 105.)
= (104 + 40 + 138 + 228 + 304 + 395) modulo 103
= 1209 modulo 103
= 11 remainder 76
76 is the lowercase letter l in the Code 128 B mapping (value + 32 = ASCII 108).
Encoded, Hello is: ÑHellolÓ
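The calculation above can be checked in a few lines of Python (the `ord(c) - 32` value mapping holds for Code 128 B characters in the printable ASCII range; the function name is my own):

```python
def code128b_checksum(text: str) -> int:
    """Modulo-103 checksum for Code 128 B: Start B (104) plus each
    character's value (ASCII - 32) weighted by its 1-based position."""
    start_b = 104
    weighted = sum((ord(c) - 32) * i for i, c in enumerate(text, start=1))
    return (start_b + weighted) % 103

# "Hello" gives 76, which the font renders as lowercase l (76 + 32 = ASCII 108)
```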
Practical note: use the command-line code128 encoder to generate the codes and then a Code128 font to render them, or use the bookland python script to generate an .eps file (which is better when needing an encoded ISBN barcode with all the trimmings). Bookland can't do a bare barcode; it interprets even a hyphen-less ISBN-10 into an ISBN-13.
Bugnote: the code128 encoder has a problem rendering the character for capital I with diaeresis (Ï, which also resembles a divide sign), so there has to be a fallback to non-compressed encoding (start and stop characters plus checksum, but no clever compression). The start and stop characters from code128's compressed output are Ò and Ó; for non-compressed they are Ñ and Ó. A simple LibreOffice Calc implementation of Code 128 for the non-compressed case is available.
Amazon Barcode Generation
The very first bit of text Amazon provides says it does not use leading or trailing digits or checksum digits. This is not correct (at least judging from Amazon-generated Purchase Orders and Shipping Labels). Amazon uses a Code 128 B scheme (their start character is Start B) but constrains the codes to A-Z and 0-9 (no lowercase alphabetic characters). They do use a checksum.
Does this really matter? Probably not. Most likely, all user/customer/vendor-generated codes (with or without start/stop/checksum) will be recognized. Implementation for bar code scanners is in software, and even most of the mobile apps that do bar code recognition support codes with and without start, stop and checksums. However, it does make sense to use the same system that Amazon implements. And of course to understand how this works in case of needing to generate missing Amazon barcodes such as Purchase Order or Shipment numbers.
And so, now we know.
The simplest approach is two steps:
1. Take the given text string, use Code 128 B, and calculate the checksum (as in the example above)
2. Prepend the start character, then append the checksum character and the stop character
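As a sketch, the two steps look like this in Python, assuming the font mapping described above (Ñ for Start B, Ó for stop, data and checksum glyphs at value + 32; checksum values above 94 map to other glyphs, which vary by font version, so they are not handled here):

```python
def encode_code128b(text: str) -> str:
    """Wrap text with Start B, the modulo-103 checksum glyph, and the stop glyph,
    ready to be set in a Code128 barcode font."""
    checksum = (104 + sum((ord(c) - 32) * i
                          for i, c in enumerate(text, start=1))) % 103
    if checksum > 94:
        raise ValueError("checksum glyph beyond value 94; font mapping varies")
    return "\u00d1" + text + chr(checksum + 32) + "\u00d3"  # Ñ ... Ó

# encode_code128b("Hello") reproduces the worked example: ÑHellolÓ
```

Type the returned string, switch the typeface to the barcode font, and the result is a scannable (and still editable) barcode.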
Because characters always have the same numeric value and the same bar code encoding, it is simple to type in what will be represented (generally, capital letters and numbers only) and change the typeface, rendering the characters into bar codes. A great, open-source, and free TTF font is available from Grand Zebu.
Online Encoding / Local Scripts
Note: most sites rot over time, so even if we built a link to something that didn't suck, that would become untrue. Best to stick with code we can manage, that is, forked GitHub repositories, namely the bookland python script.
The Barton site has a handy barcode encoding tool that generates the start, stop, and checksum characters. This makes it easy to copy/paste into text set in a barcode font and, voilà, a fully generated (and editable) barcode. However, it uses an old system, so the control characters do not match the standard found in Code128.TTF v2.0. Also, the Barton site hosts the old 1.2 version of the Code128.TTF font. Avoid both.
Encoding and Barcode Generation Implementations
Visual Basic / VBA
Better (though a bit old school) is using LibreOffice and a macro offered by the amazing Grand Zebu, who is also the original source of this open-source font (macros have to be enabled in security settings).
Note: this no longer runs on the latest version of Libre Office.
While this is an aspect of a style guide (for the structure of the metadata itself), this is very much a devops document, for use and manipulation of metadata in publishing.
The basic point is that certain kinds of information about an image file are most conveniently kept inside the file itself, accessed and manipulated programmatically. One example use is presenting the dimensions of images in web pages using CSS: since the dimensional information (width and height) is already present in the image itself, there is no reason why that information cannot be read and the CSS generated from it.
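As a sketch of that idea: a PNG's width and height sit at fixed offsets in its IHDR chunk, so they can be read without any imaging library (the `css_for_image` helper and class naming are my own illustration):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data: bytes) -> tuple:
    """Return (width, height) from the IHDR chunk, which is always first."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # Bytes 8-16 hold the IHDR length and type; bytes 16-24 are the
    # big-endian width and height.
    return struct.unpack(">II", data[16:24])

def css_for_image(class_name: str, data: bytes) -> str:
    """Generate a CSS rule sized to the image, for use in web pages."""
    width, height = png_dimensions(data)
    return f".{class_name} {{ width: {width}px; height: {height}px; }}"
```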
A more crude, but effective approach is to put this same information in the image filename, in some consistent way. However, there is more information that may be needed, such as title, caption, source, author, date, etc. The best approach would be to ensure that all editors can read (and manipulate) such information, and at the least not discard it.
It would be best if metadata could be added, modified at each stage of editing, and by default would be preserved on copy, export, format changes, compression, etc.
Generally things start with or result in various formats, including:
- .svg (native Inkscape)
- .xcf (native GIMP)
- .ico (essentially a renamed .png)
- .gif (rarely, if ever)
It turns out there is more than one way to stuff a keyword. For JPG there is EXIF, but PNG and SVG use other standards (PNG does not have EXIF). The common denominator is XMP, Adobe's XML-based format; there are also IPTC tags. When looking at a file with a metadata tool, generally a few different bits and bobs are present.
In SVG, metadata follows the Dublin Core standard, and the same is generally available in PNG.
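For PNG specifically, textual metadata lives in tEXt (or iTXt/zTXt) chunks. A minimal pure-Python sketch, writing a chunk in front of IDAT (the helper names are my own; the chunk layout follows the PNG specification):

```python
import struct
import zlib

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, body, CRC over type + body."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def add_text_chunk(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt chunk right after IHDR, i.e., in front of IDAT."""
    # 8-byte signature + IHDR chunk (4 len + 4 type + 13 data + 4 CRC) = 33 bytes
    body = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return png[:33] + _chunk(b"tEXt", body) + png[33:]

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect keyword/value pairs from tEXt chunks."""
    found, pos = {}, 8
    while pos + 8 <= len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            found[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # skip length, type, body, CRC
    return found
```

In practice a tool like ExifTool does this (and XMP/IPTC besides), but the point is that the container format itself is simple.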
Image Editors and Metadata Behavior
Currently, the most convenient tools (aka the ones I use), and their status on JPG EXIF and PNG (XMP) metadata:
Will blank overwrite PNG metadata added by ImageMagick or ExifTool. However, if the metadata is written by Inkscape in an SVG file and then exported as PNG, it will read it correctly. This is because Inkscape puts the info in front of IDAT, but the others do not.
Cannot edit PNG metadata
A somewhat awkward XML editor for SVG metadata. Don't use it or it will corrupt the file.
File > Document Metadata provides a nice Dublin Core interface
An EXIF/XMP metadata editor/viewer was added as of GIMP 2.9.4 (possibly 2.9.2), under Image > Image Metadata. Note that these are the experimental/unstable releases (stable releases come every few years, so to get this functionality you have to live on the edge a bit).
Partha's McGimp variants, the McGimp 2.9.5 64-bit Experimental, or McGimp 2.9.5 64-bit Color Corrected Experimental, are based on 2.9.4 (and are interesting projects in their own right, with HDR extensions and plugins).
Unfortunately GIMP won't keep metadata in the XCF native GIMP file.
Too bad this doesn't work, that is it cannot edit metadata in JPG files.
Turn off remove metadata options to preserve metadata
Can read and write all metadata, but does it in a way that OSX Preview will ignore/overwrite
If the metadata is originally created by Inkscape, it can edit it in a way that preserves it
WordPress is rolling out a Native Font Stack, which is meant to stop the loading of external fonts (namely from Google). A very worthy endeavor. I've got this stack running on mcneill.io toward the same end. I'm fine with the typography, and it has basically been a drop-in replacement for the previous stack.
> font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
See more info about Native Fonts in WordPress 4.6.
A smart home has a certain promise to it, that is, an intellect, a brain. But before we apply that metaphor we need to understand the basic elements of a home (and only after that, what would make it smart). But even the idea of a home is already too constraining and not accurate in other kinds of spaces, such as the workplace, or public areas. And so we want to take up the idea and promise of smart interior spaces.
Smart spaces do not necessarily have to be electronic, as smarts can be in simple spatial layout and design. One great example is a door. Doors can swing, slide, or raise and lower. Doors in fact don't have to exist, as there are entranceways without them (though they still would act as a portal).
> Note that a much more nuanced and psychologically sound approach (from ecological psychology) was developed by Gibson around the nature of affordances. I spent a few semesters doing research in this area a decade ago. While the detail is quite worthy of study, the fundamental nature of an affordance is that it is a relationship between the animal and some thing (e.g., a switch on a wall, a door in a doorway).
Switches and Doors
Let us now take a simple thing such as a switch on a wall. It could be for any device, such as a light, but there is a lot going on with a simple switch, including:
- Conveying information (the position of the switch can indicate present state, on/off)
- Expressing intention (that is, I press the switch to indicate my desire to turn on the light)
- While obvious, this actually conveys two things: information (desire/intention) and actual switch activation (the electric circuit)
- Perhaps my wife wants to ask me to turn on the light; this is a voice command (and part of the built environment)
- Perhaps lighting is needed at certain days and times (say, when people are in a given room and there is not enough daylight)
- Upon power failure, a backup light source (or energy source) is needed, and the usual switched light may not work
And so, while we generally encounter switches as fixed interfaces on walls connected to electric circuits, they need not be, if we can break apart these various dimensions of interactivity:
- The informational switch will keep state
- The sensor switch will be able to detect a change of intention based on command or anticipation
- The electric switch will connect or disconnect power from a device (or otherwise signal the device to change state)
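The decomposition above can be sketched as three separate objects (all names here are my own illustration, not any particular home-automation API):

```python
class InformationalSwitch:
    """Keeps state: the single source of truth for on/off."""
    def __init__(self):
        self.is_on = False

class SensorSwitch:
    """Detects a change of intention (a press, a voice command, a schedule)
    and propagates it: to the state record, and to any electric switches."""
    def __init__(self, state: InformationalSwitch):
        self.state = state
        self.listeners = []

    def intent(self, turn_on: bool):
        self.state.is_on = turn_on
        for listener in self.listeners:
            listener(turn_on)

class ElectricSwitch:
    """Connects or disconnects power (or otherwise signals the device)."""
    def __init__(self):
        self.powered = False

    def __call__(self, turn_on: bool):
        self.powered = turn_on

# Wire the three roles together: one intention, two effects.
state = InformationalSwitch()
sensor = SensorSwitch(state)
relay = ElectricSwitch()
sensor.listeners.append(relay)
sensor.intent(True)  # a press, a voice command, or a schedule firing
```

The point of separating the roles is that the informational switch can be queried remotely, the sensor switch can be replaced by voice or anticipation, and the electric switch can live anywhere on the circuit.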
In terms of monitoring a smart space, all information and the ability to interact (either programming intentions or directly commanding) can be made available. And in terms of interacting in the smart space, there should be some way of not misreading intentions: for example, overhearing a conversation and reacting to keywords not meant for that context, or detecting motion and opening a door that does not need to be opened.
Commands can include the medium of voice and gesture, as well as the current paradigm of direct tactile interaction (opening a door, turning on a light).
Interactions by the human in a built environment, in order to be smart, need to communicate properly and effectively. More reactive kinds of smartness should then begin with sensors: the ability to sense the environment and the humans (and possibly animals) within it. Sound, gesture, haptics, temperature, ambient light, and time are the basics of interaction in the first place:
- We turn on the light when we do not have enough of it
- We open and close a door when entering / leaving one room for another
- We turn on the heat or air conditioning when the temperature becomes uncomfortable
Besides levels of human comfort, additional smarts would be welcome if a space knew how to, say, kill off the mosquitoes and ants in an area. Nathan Myhrvold has the great idea of shooting mosquitoes with lasers, which sounds quite lovely.
Heating and Cooling Smarts
Temperature control becomes more important as one's place of habitation tends toward uncomfortable extremes. A smart house should, without any effort, be climatically efficient, offsetting its surroundings. This would include things like insulation, door and window optimization, effective use (or exclusion) of solar radiation, and the like.
Beyond that, some kind of ideal climate should be provided when humans need active climate management (thermostatic sensors and HVAC interventions). Since most air conditioners (the key component in Thailand) are ridiculously stupid, some kind of separate power-on/power-off management should be in place.
First Principle - Dumb Smarts + Smart Smarts
Dumb smarts are those things that provide built-in intelligence: things that last longer and require less maintenance (and zero energy) to retain their value. This means that the smartest homes should be the smartest dumb homes first, and only then add communications and sensor intelligence (which require technology and energy).
Integration with openHAB
A great piece of software, capable of running on a Raspberry Pi, is openHAB. Check out the video. I really liked seeing the energy-harvesting switches (piezoelectricity rather than batteries).
Piezoelectric Wireless Sensors and Switches
To be honest, there is no physical constraint preventing the creation of piezoelectric keyboards and computer mice, just sheer laziness in the laboratories of Samsung, Apple, Microsoft, and the like. EnOcean has great products, including wireless, self-powered switches and sensors.
Fonts, Typeface, and Typography -- what a mess. Not only does one have to repeat, essentially, the history of typography and letterpress printing to understand all this, but most fonts, like software, are protected and licensed in strange ways, which can increase risk.
And so, we turn to Open Source and Open Content licensing as well as interesting, beautiful and most of all useful typefaces to use these days.
Note that this page is updated on occasion.
There is not only kerning, the spacing between specific pairs of glyphs (e.g., x and each rounded glyph: a, c, d, e, etc.), but also ligatures, which are essentially unique hybrid glyphs representing two or more glyphs positioned next to each other (e.g., ff, fi, etc.).
> While this video is useful, it does not take into account that OpenType is not a superset of TrueType and there are some features in TrueType not available in OpenType (e.g., hand-hinting), and being an older font format, there may (still) be more tools that work better with it.[^ttf-vs-otf]
Digression into Languages, Scripts, Keyboards, Characters, Glyphs
First we start with languages, which may have one script, more than one, or none (entirely verbal languages). Scripts consist of characters and glyphs: characters are the discrete semantic units, and glyphs are the visual marks that render them. In many cases a glyph can be made up of multiple characters, and can also be modified in some way (kerning, position, etc.) based on combination.
Keyboards map keys to characters, but a specific font can have mapping tables which produce these custom glyphs when characters occur in some kind of proximity.
More than five years ago I posted something about fonts, namely a comparison of a few typefaces. It is lost to the ages via bitrot. Doesn't matter; there is more going on recently.
Fonts of Importance
The ultimate goal will be working toward a nice set of typefaces which will support various scripts and collectively be a great toolbox for the creation of most kinds of documents and publications.
Types of Typefaces
Serif: Humanist - E.g., Caslon, Linux Libertine, Linux Biolinum, Gentium
Sans Serif: Geometric - E.g., Beteckna, Futura
Sans Serif: Grotesk - E.g., Nimbus Sans, Franklin Gothic URW, Helvetica
Sans Serif: Humanist - E.g., Gill Sans, Verdana, Open Sans
While this appears useful, and is based on the Vox-ATypI classification, the criticism is that it is outdated, that small differences separate the categories, and ultimately that these categories are unhelpful in any meaningful and practical sense.
OTF has within it a different set of categories that can be used, including:
OS2 Width Class: From Expanded to Condensed
OS2 Weight Class: Standard style from very light, thin to heavy, black
PANOSE Family: Any, No fit, Text and display, Script, Decorative, Pictorial
PANOSE Serifs: Degree and type of serif (from normal sans through a variety of options)
PANOSE Weight: Standard style from very light, thin to heavy, black
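These OS/2 fields can be read straight out of a font binary: the sfnt table directory locates the OS/2 table, and usWeightClass and usWidthClass sit at offsets 4 and 6 within it (the offsets follow the OpenType specification; the function name is my own):

```python
import struct

def os2_classes(font: bytes) -> tuple:
    """Return (usWeightClass, usWidthClass) from a TTF/OTF's OS/2 table."""
    num_tables = struct.unpack(">H", font[4:6])[0]
    for i in range(num_tables):
        # Table directory records start at byte 12, 16 bytes each:
        # tag (4), checksum (4), offset (4), length (4).
        record = font[12 + 16 * i: 12 + 16 * i + 16]
        tag, _, offset, _ = struct.unpack(">4sIII", record)
        if tag == b"OS/2":
            # OS/2 table: version (2), xAvgCharWidth (2), then the two classes.
            return struct.unpack(">HH", font[offset + 4: offset + 8])
    raise ValueError("no OS/2 table found")
```

usWeightClass runs from 100 (Thin) to 900 (Black), and usWidthClass from 1 (Ultra-condensed) to 9 (Ultra-expanded), matching the ranges listed above.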
Evolution is about adaptation (and natural selection), but the part most interesting is the adaptation that is possible in our individual lives. This does not mean we can become another species in one generation, but that we are an adaptive organism. Some folks talk about evolution in the same terms as fate. While this is correct (insofar as it is our fate to be human, with specific attributes which are the result of evolution), it is much less important than what we can change and how we can leverage what we have into achievement.
Continue reading Evolution is Adaptation, not Fate
What does it take to create a new normal? For the body, to make regular exercise the default and not exercising the abnormal condition? By exercise I mean something, well, meaningful.
Continue reading Up is Down
Sakichi Toyoda, founder of the Toyota group (his son Kiichiro founded the Toyota Motor Company), is considered one of the greatest if not the greatest inventor of Japan and the father of Japanese industrialization. His impact on the world should not be underestimated. As with most historical figures, our tasks are different because we live in a different world. However, we can learn from the thinking of this great man.
Toyoda invented the Five Whys question asking method for discovering the root cause of events, particularly failure. The idea is to get at root causes rather than symptoms so that improvements rather than merely temporary fixes can be made to a system.
Root Cause Discovery is Difficult
Getting at root causes is not easy, and the method is not foolproof, but it is a profound and useful tool. Root cause analysis is a fundamental feature of innovative systems; otherwise the changes to the system will be cosmetic, or, worse, will cause the system to further degrade.
Example of Root Cause 5 Whys Analysis
My car will not start. (the problem)
- Why? - The battery is dead. (first why)
- Why? - The alternator is not functioning. (second why)
- Why? - The alternator belt has broken. (third why)
- Why? - The alternator belt was well beyond its useful service life and has never been replaced. (fourth why)
- Why? - I have not been maintaining my car according to the recommended service schedule. (fifth why, a root cause)
The Five Whys at Lanna Innovation
At Lanna Innovation, we also ask the five whys, but not only in events of failure in terms of production, but failure in terms of a clash of understandings and in disagreements. Why do we disagree? What is the failure of perception or the failure of conception taking place? Understanding root causes is key to communication and product development processes as well as engineering quality control.
When a theory is firmly believed, it can take a decade to get the field to drop it even if the data falsifying the theory are there for all to see.