Friday, August 31, 2018

Harp/Jade Debug Snippet

I’ve been using Harp with Jade recently. At the beginning, it was hard for me to figure out the JSON data structure Harp uses at build time. It was also hard to debug JavaScript functions written in Jade and executed at Harp compile time. In the end, I figured out that I could dump that JSON as a string to console.log in the browser. Everything is so much easier now.

Now I have a debug.jade file in my project. Whenever I want to examine some JSON data in Harp, I just call != partial('debug', { data: anything }) and pass in the right data.
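The original debug.jade isn’t shown here, but a minimal reconstruction might look like this (assuming the partial receives the value under `data`, as in the call above):

```jade
//- debug.jade — hypothetical reconstruction
//- Serializes whatever was passed in as `data` and logs it
//- to the browser console when the page loads.
script.
  console.log(!{JSON.stringify(data)});
```

The `!{}` unescaped interpolation is what lets the serialized JSON land in the page as-is instead of being HTML-escaped.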

Wednesday, May 16, 2018

Divide Notifications into Interrupt, Reminder and Backlog

iOS push notification

I ran an experiment for more than a year: I put all of my iOS devices into permanent Do Not Disturb mode. It was a great experience, but I felt I should let a very small number of push notifications through.

My initial intention in setting permanent Do Not Disturb was to avoid distraction in my 1:1s. I could see how other people looked at their phones during 1:1s. Sometimes it was a push notification. Sometimes they were just not paying attention to me. I wanted to avoid doing that, so I put my iPhone into DND mode. My Apple Watch mirrored it, so it didn’t make a sound or vibrate either.

My experience with permanent DND was good. I no longer had the unconscious reaction of looking at my phone when it vibrated. I didn’t need to put my phone away, because it acted like it never received those push notifications until I proactively looked at it. There was one small problem: sometimes I missed a really important push notification, and then I missed a meeting or a time-sensitive message.

Because I was very happy with the silence of almost all notifications, I didn’t want to change that. I only wanted a small patch to let some of them through. That got me thinking about what my ideal push notification model would be. My conclusion was that push notifications should be divided into three categories:

  • Interrupt: An interrupt should interrupt whatever transaction I’m in and get me to deal with it immediately. This should be extremely rare, and I shouldn’t miss when it happens.
  • Reminder: An ideal reminder should remind me of things in the perfect context, which usually means right time (in between transactions) and right place.
  • Backlog: This is like an email sitting in my Gmail unopened for years. If I want to take a look, I can. Otherwise, don’t come to me. Backlog should never be pushed. It should be pulled.

These are the ideal categories, but iOS doesn’t allow an exact setup like this. I have to tweak it a little and make it implementable within iOS. There are these constraints:

First of all, iOS push notification settings are on a per-app basis. Because most apps don’t provide finer-grained control over push notifications, I have to assign a category to each app. If I assign the interrupt category to an app, every push notification from that app is an interrupt.

Second, only built-in apps have complex notification settings for iOS and watchOS. For most apps, there’s only one setting for sound: it’s either on or off. Vibration is tied to sound, and they are toggled together. There’s an additional toggle for Apple Watch, and it’s all or nothing. That means if sound and vibration are on for iPhone, they’re also on for Apple Watch.

One more thing: because existing technology can’t remind me in the perfect context, I have to allow apps to remind me at their convenience and use snooze to make them wait for the perfect context.

Based on these constraints, here is my setup:

  • Interrupt: Apps in the interrupt category can use sound/vibration, and they can show notifications on Apple Watch. I have certain messaging apps in this category; they are not my main messaging apps.
  • Reminder: Apps in the reminder category can also use sound/vibration, and they show up on Apple Watch. They need to provide a snooze button in the notification. I have some calendar and todo apps in this category.
  • Backlog: No sound and thus no vibration, and not allowed on Apple Watch. This is exactly the same as permanent DND mode.

Beyond these categories, I disabled badges for apps that abuse them: if an app uses its badge to get my attention frequently, I turn its badge off.

There’s a fourth category that I would call null. Apps in this category use push notifications to promote themselves in ways that aren’t valuable to me. I disabled push notifications for them entirely.

This setup isn’t perfect. Apple doesn’t design iOS to work this way: it gives some level of control to users, but no automation for power users. At the moment, I have to turn off sound for most apps one by one because they are in the backlog category. I have to turn them off for Apple Watch, too. If I could make these the default settings for all new apps, it would save me time.

This is my new experiment for 2018. It’s working well so far. Maybe I’ll give another round of updates in 2019 and identify new ways to tweak it. I hope iOS and Android have more powerful ways to manage notifications by then.

Monday, May 14, 2018

DNS over Phone

This is very impractical, but I did it for fun anyway. If Cloudflare can do DNS over SMS, then somebody is going to build DNS over snail mail some day.

The DNS over Phone setup is very simple. We need a Workflow (iOS) to query DNS over HTTPS, and an IFTTT applet to call us with the result. That’s it. Below are the Workflow and IFTTT applet you can import immediately:

IFTTT Applet

This applet uses “Workflow” as the trigger and “Phone Call (US only)” as the action. It’s built on the IFTTT Platform, so it’s shareable. It takes one ingredient from the input and announces it in the phone call. That ingredient’s value comes from the Workflow.


This Workflow asks for a domain. (If we give it a URL, it extracts the domain from the URL.) Then it sends the domain to Google Public DNS, which provides a DNS over HTTPS service. The response from Google Public DNS is in JSON. We want to read json.Answer[json.Answer.length - 1].data from it, because that’s the IP address we are looking for. In the end, we trigger the IFTTT applet with the IP address as the only ingredient.
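Outside of Workflow, the same lookup can be sketched in plain JavaScript (Node.js 18+ for the built-in fetch; the sample response below is made up for illustration):

```javascript
// Pick the final answer from a Google Public DNS JSON response.
// When the domain resolves through CNAMEs, json.Answer lists the
// CNAME records first and the A record (the IP address) last.
function lastAnswer(json) {
  return json.Answer[json.Answer.length - 1].data;
}

// Query Google Public DNS over HTTPS for an A record.
async function resolveOverHttps(domain) {
  const url = `https://dns.google/resolve?name=${encodeURIComponent(domain)}&type=A`;
  const res = await fetch(url);
  return lastAnswer(await res.json());
}

// A made-up response showing the CNAME-then-A shape:
const sample = {
  Answer: [
    { name: 'www.example.com.', type: 5, data: 'example.com.' }, // CNAME
    { name: 'example.com.', type: 1, data: '93.184.216.34' },    // A
  ],
};
console.log(lastAnswer(sample)); // → 93.184.216.34
```

The Workflow version does the same thing with a Get Contents of URL action followed by Get Value for Path.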


Q: Why do we use Google instead of Cloudflare for DNS over HTTPS?
A: They provide JSON responses in very similar formats. Google’s response has a Content-Type: application/x-javascript; charset=UTF-8 header, while Cloudflare’s has Content-Type: application/dns-json. That tiny difference makes Workflow treat Cloudflare’s response as a binary file instead of text. There might be a way to get the text out of a file; once I figure that out, I can offer Cloudflare as an option.

Q: Why do we read from the last item of json.Answer array?
A: If the domain uses a CNAME record, then json.Answer will contain multiple items. The last item is the A record pointing to the IP address; the other items are CNAME records.

Sunday, April 08, 2018

Starting my Patreon experiments

I decided to start two experiments with two Patreon tiers.

The first one allows patrons to read my blog posts 1 week before they are publicly available on my regular blogs. I’m not sure how many people would actually buy this, because my blog posts are usually not time sensitive at all. It’s a way for patrons to encourage me to write blog posts more often. The 1-week gap is more of a symbolic reward.

The second one is more interesting. I’m opening 1:1s at $100/hr for anybody interested in talking to me. I know some people are willing to pay me to answer questions on Zhihu, but at a much lower price and probably requiring much less time. The reason I hesitate to answer those questions is that they don’t build trust or a relationship. People can get an answer and then go, but that doesn’t create any long-term value. I want to see if I can build long-lasting relationships through monthly 1:1s while getting paid at a reasonable rate.

Because monthly 1:1s aren’t really scalable, I limit this tier to 5 patrons. If it becomes popular, I’ll find a solution to the scalability problem. It’s a good problem to have, so I’m not thinking about it for now.

If you are interested, here’s a link to my Patreon profile. You will be able to find the reward tiers there.

Tuesday, December 06, 2016

Anova Precision Cooker (Wi-Fi)


I was one of the backers of Anova’s Bluetooth precision cooker when it was on Kickstarter. It was useful and reliable, and I started doing a lot of sous vide after I got it (because I really like meat and seafood). When the Wi-Fi version came out, I really wanted one. Finally it went on discount, so I bought one.

The box looks nicer and bigger than the tube that came with the Bluetooth version. There is more friendly information inside to help you get started. It’s useful for new customers but meaningless to me; I would just install the app and start cooking.


The first problem I ran into was that it couldn’t connect to my Wi-Fi. I read through their troubleshooting guide a few times and wondered what was wrong. I thought maybe it was because my 5GHz network shares the same SSID as my 2.4GHz network, but I couldn’t turn that off because it’s Eero. What else could be wrong? Anova says the Wi-Fi password should be between 8 and 18 characters. Mine was definitely much longer than that. I changed the password and it worked.

Changing the Wi-Fi password isn’t free, though. I have other smart things connected to my Wi-Fi, for example my Amazon Echo. After changing the password, I had to update all of those devices. None of them have a password length limit; Anova is the first with such a weird restriction.

Naturally my next step was to compare it with the Bluetooth version. They look almost exactly the same; you can only tell the difference by looking at the max/min water level markers. Their temperature readings were also different, so I had to run a calibration.


When I received my Bluetooth version from Kickstarter, Anova said the first batch was not correctly calibrated. They said we could ship it back for calibration, or do an ice bath to calibrate it ourselves. I was too lazy to do either, so I expected it to be slightly off.

Now that the two readings differed, and both differed from my thermometer, I wanted to calibrate. I didn’t know which one to trust, so I chose my thermometer as the standard. (I’m still too lazy to do an ice bath, because I don’t keep ice in my freezer.) I used their apps to adjust the offsets. In the end the two were within 0.1 degree of each other, and I was happy with that.

Overall I like the Wi-Fi version, even though I haven’t had a chance to use it over Wi-Fi yet. The reason is that I don’t cook on workdays, so it’s not common for me to start cooking while away. For food safety reasons, I should keep the food in an ice bath before starting a remote cook. That’s another hard-to-satisfy requirement.

Sunday, July 17, 2016

Alexa, tell me a joke.

I bought an Amazon Echo on Prime Day. I wanted to know how useful it was, and it was $50 off, so I placed the order. On the same day I bought 2 sets of Philips Hue starter kits from Best Buy. Each was $50 off, so it was a great deal.

When I received the Amazon Echo, I thought the setup would be easy, like Siri. I was wrong. First it needed to connect to Wi-Fi, and there’s no way to use Bluetooth to control the Echo and set up Wi-Fi. The way it works is that it broadcasts an ad-hoc Wi-Fi network and you connect to that to set it up. After spending 20 minutes on setup, the only thing that was useful and worked for me was “Alexa, tell me a joke”.


What I really want Alexa to work with is Philips Hue and Logitech Harmony. It can scan for Hue light bulbs by itself, and it’s smart enough to find all of them. However, when I said “Alexa, turn on lights in the bedroom,” it told me there’s no “bedroom”. It took me some time to understand that the bedroom concept set up within the Hue app isn’t shared; I needed to create a group named “bedroom” in Alexa. That is not very smart.

The next one to set up was Harmony. The official page says it’s compatible with Alexa, but in reality it needs to go through IFTTT. After setting up the IFTTT recipe, I found out I need to use the magic word “trigger” when proxying commands through IFTTT. Instead of “Alexa, watch AppleTV”, I need to say “Alexa, trigger watch AppleTV”. If I forget the magic word, Alexa complains that there’s no “AppleTV”. This is really annoying, because now I need to consciously think about whether I’m talking to Harmony before issuing a command.

I set up Hue two days before Alexa. After setting up Alexa, it seemed to create some kind of wireless interference around it, which made some of my Hue light bulbs unreachable from the bridge. I had to move the bridge around and switch channels. That’s extra inconvenience.

Sometimes I forget that I should control the lights through Alexa or my phone; I just flip the switch and then realize I shouldn’t. I’m going to order a light switch cover and a Hue wireless dimmer. These are two separate patches that don’t work together, which is a little disappointing. I don’t want to modify my light switch; I want to cover it and put the wireless dimmer on top of the cover. Since an old-style light switch cover doesn’t have a flat surface, I will have to put the wireless dimmer on the side.

Overall Alexa does what I want it to do but doesn’t meet my expectation of intelligence. I think the problem is the lack of domain knowledge about each device it connects to. Unlike Siri, which knows every type of query Apple can handle, Alexa doesn’t know how I group light bulbs into rooms or what kinds of devices Harmony controls. I have to set up all kinds of routines to encode the domain knowledge, while Alexa is just a voice interface that executes commands. In some sense, Alexa isn’t much better than a CLI or Alfred. I still need to do the scripting myself. At best it can accept some parameters when triggering the script.

This reminds me of Firefox Ubiquity, an ambitious project that wanted to process commands like “book a flight to Chicago next Monday to Thursday, no red-eyes, the cheapest”. After 8 years, the ability to process this kind of command still seems far away. Domain knowledge discovery is not solved.

Actually, it’s solved in theory, but nobody cares. Look at the landscape of public APIs. All these so-called RESTful APIs are not RESTful; they are only CRUD APIs. Nobody cares about real RESTful APIs that support discovery and use hypermedia transitions. (If you don’t know the difference, read REST in Practice.) If devices had truly RESTful APIs, it would be possible for a client to discover and negotiate parameters. With a hypertext document describing the APIs, Alexa or any voice interface should be able to describe the parameters to me. If I ask Alexa to book a flight, it should be able to learn from the API that it requires a date, a time, and a destination. If I’ve already given those parameters, Alexa should pass them through; otherwise it can ask me and keep the conversation going.
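As a rough illustration of what that discovery could look like, here’s a hypothetical sketch in JavaScript. The Siren-style action description and all its field names are invented; the point is that a client can learn the required parameters from the API document itself instead of having them hard-coded:

```javascript
// Hypothetical hypermedia action description a flight-booking API
// could serve. The shape loosely follows Siren; every name here is
// invented for illustration.
const bookFlightAction = {
  name: 'book-flight',
  method: 'POST',
  href: 'https://api.example.com/flights',
  fields: [
    { name: 'destination', required: true },
    { name: 'departDate', required: true },
    { name: 'returnDate', required: true },
    { name: 'redEye', required: false },
  ],
};

// Given the parameters the user has already supplied, return the
// required fields that are still missing -- i.e., what a voice
// interface should ask about next to keep the conversation going.
function missingParameters(action, given) {
  return action.fields
    .filter((field) => field.required && !(field.name in given))
    .map((field) => field.name);
}

console.log(missingParameters(bookFlightAction, { destination: 'Chicago' }));
// → [ 'departDate', 'returnDate' ]
```

With this kind of self-describing document, the voice interface needs zero per-service scripting: the conversation loop is just “find missing required fields, ask for them, then submit”.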

Tuesday, July 01, 2014

The Dreams Fight Back

I have this lucid dream ability, or maybe I used to have it. The ability grew gradually starting in high school, but it seems I started losing it last year.

Before I had this ability, I would wake up (or not) after a nightmare. Then I found out I could continue the dream after waking up. When I wake up, I can clearly remember the last part of the nightmare. Because I’ve woken up, I can reconsider the situation and rewrite the dream as I like. If I was chased by an enemy, I could give myself a weapon, just like typing a cheat code in an FPS game.

Gradually this process became smoother and smoother. Suppressing waking up became not waking up at all. As soon as I feel I’m unreasonably threatened, I know it’s a dream and I can turn the tables. After several years of practice, I can enter a state like god mode in a game. Chased by enemies again? Freeze time and fly up into the sky. From there I can reduce the number of enemies and increase the number of allies. Then I can come back to the ground and resume the game. If it’s unbalanced because the enemies are too weak, I can pause the game and rebalance it. It’s just like balancing a custom RTS map.

I also gained the ability to wake myself up if I’m aware of being in a dream. It’s like the opposite of suppressing waking up. If I go to the bathroom three times, no matter whether I manage to use it, I know it’s a dream. Then I can tell myself I really need to leave the dream and use the bathroom. All I need to do is try to feel my sleeping body and open my eyelids. That’s enough to kick me out of the dream.

The ability isn’t perfect. As far as I can tell, it’s based on pattern recognition. Unreasonable threat is the easier pattern. After that, I observed the repeating bathroom pattern and set up the rule of three for myself. However, there are situations where I’m aware of being in a dream but can’t do anything about it. One example is when I feel I really miss someone while I manage to keep in contact with that person in my dream.

There was one time I recognized that the person I missed doesn’t exist in real life, so it had to be a dream. However, I can only alter the dream’s content; I can’t suppress a feeling coming from outside it. It’s just like the need to use the bathroom, but with the opposite result. Instead of waking myself up after realizing it was a dream, I chose to cling to that feeling and refuse to wake up for as long as possible.

That wasn’t the weirdest case, as my dreams started to fight back and take the ability from me. Like I said, feeling threatened can make me recognize it’s a dream, so my dreams started to steer away from that pattern. Now I’ve stopped dreaming about being chased. Instead, I dream about a city being blown up. It’s just like an immersive gaming experience: I watch unreasonable things happen, but I don’t feel hurt, and thus I don’t recognize that I’m in a dream.

The weirdest thing is that I started to use in-dream abilities without realizing it’s a dream. There was a recent experience like this: I felt somewhat threatened. I closed my eyes and tried to think about being in another location. I opened my eyes to see if I’d been teleported there. I knew that I don’t have the teleportation ability all the time, and I couldn’t tell when I have it. This strange understanding tricked me into not recognizing the dream, but I managed to teleport myself anyway.

In the end, I can’t tell whether this recent experience is a step forward or backward. It can be explained as a step forward: I can recognize and alter my dream so smoothly that I don’t consciously notice it’s a dream, and it happens unconsciously. The opposite explanation is also obvious: my dreams are trying to fight back and stay away from the patterns I already recognize. I don’t know which is true, but it’s an interesting experience.