Svelte: using jQuery plugin with Action API


You may have used jQuery in your projects before; however, with the advancements in frontend frameworks, we are sort of forced to forget about the vast ecosystem of jQuery and its plugins. The main reason is that your framework of choice is not compatible with jQuery or does not have a jQuery-friendly API out of the box. So we either look for an alternative library written from scratch, or for a wrapper on top of the jQuery/JavaScript plugin written in the framework of choice. Svelte, however, is a different beast: it has something called the Action API that lets you consume a jQuery/JavaScript plugin without much framework-related overhead.
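
To make that concrete, here is a minimal sketch of a Svelte action wrapping a plugin. I'm assuming jQuery UI's datepicker purely for illustration, so the plugin name and options are placeholders for whichever plugin you actually use.

// datepicker.js: a Svelte action (sketch, assuming jQuery UI's datepicker is already loaded)
import jQuery from 'jquery';

export function datepicker(node, options) {
  const $node = jQuery(node);
  $node.datepicker(options); // initialise the plugin on the raw DOM node

  return {
    // re-run whenever the bound options change
    update(newOptions) {
      $node.datepicker('destroy');
      $node.datepicker(newOptions);
    },
    // clean up when the element is removed from the DOM
    destroy() {
      $node.datepicker('destroy');
    }
  };
}

The markup then stays vanilla: <input use:datepicker={{ dateFormat: 'yy-mm-dd' }} />, with no wrapper component in sight.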

I had tweeted my wild prediction last time, which is what I have showcased in this tutorial video.

You can watch the video below👇 to see it in action or grab the source code from Github.

Material Sidenav: Custom Collapse/Expand behaviour


You may have used the Angular Material Sidenav component in your projects before; however, by default it just vanishes when collapsed. What if you want the Sidenav to remain visible with icon-only side links when collapsed? Of course, you can have two Sidenavs (one with icons + text and another with icons only) and show/hide them conditionally. But by design, Angular Material Sidenav does not let you have two Sidenavs in the same position.

So here is how I’ve devised a clever way to do away with the said limitation.

You can watch the video below👇 to see it in action or grab the source code from Github.

You might not need Angular Flex-Layout


Reader, I’m extremely sorry for using the clickbait-y headline for this post. I did not mean to talk ill about anything since this is not a rant 😷

The Angular Flex-Layout library is really useful even today to quickly sprinkle CSS Flexbox or Grid layouts declaratively in Angular applications. And it does save time compared with writing layout CSS by hand, repetitively, across various Angular components. So dropping it is not an option unless a far better alternative with the same declarative ability is available. I'm looking for something that is:

  1. Declarative, without a steep learning curve.
  2. Cost-effective over the network.
  3. Customisable enough to be an alter ego of Angular Flex-Layout.

Turns out TailwindCSS hits a home run on all these fronts. It is equally declarative, being a utility-first CSS framework with all kinds of CSS classes baked in. And it reduces the main bundle size of Angular applications by a huge margin; the larger the application, the bigger the gain (a sample application I built saw a ~40% dip in main bundle size without gzip). Watch the video for the proof 👇

Demonstration of advantages of TailwindCSS over Angular Flex Layout

If you are convinced by the demo above then you may find this comparison handy while moving away from Angular Flex-Layout in your application too.

Angular Flex-Layout directive → Tailwind equivalent CSS classes

fxLayout="<direction> <wrap>" → Use flex flex-<direction> flex-<wrap> classes
fxLayoutAlign="<main-axis> <cross-axis>" → Use justify-<main-axis>, items-<cross-axis>, and content-<cross-axis> classes
fxLayoutGap="% | px | vw | vh" → Use margin-right="% | px | vw | vh" on flex items
fxFlex="" | px | % | vw | vh | <grow> <shrink> <basis> → Use flex, grow, and shrink classes along with custom flex-basis classes overriding the *tailwind config*
fxFlexOrder="int" → Use Order classes along with custom classes overriding the *tailwind config*
fxFlexOffset="% | px | vw | vh" → Use Margin classes
fxFlexAlign="start | baseline | center | end" → Use Align Self classes
fxFlexFill and fxFill → Use w-full and h-full classes
fxShow and fxHide → Use block and hidden classes
Responsive API → Override Tailwind's default breakpoints to match Angular Flex-Layout's. For example,

theme: {
  extend: {
    screens: {
      'sm': { 'min': '600px', 'max': '959px' },
      'lt-sm': { 'max': '599px' }
      ...
    },
  }
}

[ngClass.xs]="{'first': false, 'third': true}" → Use the aforementioned responsive API, e.g. class="gt-xs:first xs:third"
[ngStyle.sm]="{'font-size': '16px'}" → Use custom CSS classes as mentioned above
<img src="a.ico" src.xs="b.ico"/> → Use custom CSS classes as mentioned above
gdAreas.xs="header header | sidebar content" → Tailwind does not support grid-template-areas, so use grid columns and then apply Grid Row Start/End or Grid Column Start/End classes on the grid items. For example, xs:grid-cols-2 xs:grid-rows-2
gdAlignColumns="<main-axis> <cross-axis>" → Use Align Items and Align Content classes
gdAlignRows="<main-axis> <cross-axis>" → Use Place Content and Place Items classes
gdAuto → Use Grid Auto Flow classes
gdColumns → Use Grid Template Columns classes along with custom classes overriding the *tailwind config*
gdRows → Use Grid Template Rows classes along with custom classes overriding the *tailwind config*
gdGap → Use Gap classes along with custom classes overriding the *tailwind config*

Angular Flex-Layout vs Tailwind comparison

Unlike TailwindCSS, Angular Flex-Layout also ships a MediaObserver service that lets applications listen for media query activations. I think the BreakpointObserver from Angular CDK can compensate for it.
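
For reference, here is a minimal sketch (not taken from the sample application) of how BreakpointObserver could stand in for MediaObserver; the media query mirrors the lt-sm range from the Tailwind config above.

import { Component } from '@angular/core';
import { BreakpointObserver } from '@angular/cdk/layout';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(breakpointObserver: BreakpointObserver) {
    // emits whenever the media query starts or stops matching,
    // similar to listening for media query activations via MediaObserver
    breakpointObserver
      .observe(['(max-width: 599px)'])
      .subscribe((state) => console.log('lt-sm active?', state.matches));
  }
}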

Wrap-up

Alright, that's it from me. You can find the source code of the aforementioned sample application at https://github.com/codef0rmer/you-might-not-need-angular-flex-layout

Do let me know in the comments if you agree with the proposed solution or if I missed anything.

🎅 Happy Xmas and Happy New Year 2021 🎅

Update 1

I have tried to integrate Tailwind into an Angular application that uses Angular Material and found a few issues with CSS specificity. In certain cases, the Tailwind CSS classes do not get applied as expected, and to obviate the issue I had to apply !important on a handful of Tailwind CSS classes. The approach was tweeted last time.

WTH is Injection Token in Angular


Angular is a very advanced and well-thought-out framework where Angular Components, Directives, Services, and Routes are just the tip of the iceberg. So my intention with the WTH series of posts is to understand something in Angular (and its brethren) that I do not know yet, and to teach others as well.

Let us start with a simple example, an Angular Injectable Service that we all know and use extensively in any Angular project. This particular example is also very common in Angular projects where we often maintain environment-specific configurations.

// environment.service.ts
import { Injectable } from '@angular/core';

@Injectable({providedIn: 'root'})
export class Environment1 {
    production: boolean = false;
}

Here, we annotated a standard ES6 class with the @Injectable decorator and forced Angular to provide it at the application root, making it a singleton service, i.e. a single instance shared across the application. Then we can use the TypeScript constructor shorthand syntax to inject the above service into the component's constructor as follows.

// app.component.ts
import { Component } from '@angular/core';

import { Environment1 } from './environment.service'; 

@Component({
    selector: 'my-app',
    templateUrl: './app.component.html'
})
export class AppComponent  {
    constructor(private environment1: Environment1) {}
}

But, often such environment-specific configurations are in the form of POJOs and not the ES6 class.

// environment.service.ts
export interface Environment {
    production: boolean;
}
export const Environment2: Environment = {
    production: false
};

So the Typescript constructor shorthand syntax will not be useful in this case. However, we can naively just store the POJO in a class property and use it in the template.

// app.component.ts
import { Component } from '@angular/core';

import { Environment, Environment2 } from './environment.service'; 

@Component({
    selector: 'my-app',
    templateUrl: './app.component.html'
})
export class AppComponent  {
    environment2: Environment = Environment2;
}

This will work, no doubt! But it defeats the whole purpose of Dependency Injection (DI) in Angular, which helps us mock dependencies seamlessly while testing.

InjectionToken

That’s why Angular provides a mechanism to create an injection token for POJOs to make them injectable.

Creating InjectionToken

Creating an InjectionToken is pretty straightforward. First, describe your injection token, then set the scope with providedIn (just like the Injectable service we saw earlier), followed by a factory function that is evaluated when the generated token is injected into a component.

Here, we are creating an injection token ENVIRONMENT for the Environment2 POJO.

// injection.tokens.ts
import { InjectionToken } from '@angular/core';

import { Environment, Environment2 } from './environment.service'; 

export const ENVIRONMENT = new InjectionToken<Environment>(
  'environment',
  {
    providedIn: 'root',
    factory: () => Environment2
  }
);

Feel free to remove providedIn in case you do not want a singleton instance of the token.

Injecting InjectionToken

Now that we have the Injection Token available, all we need to do is inject it into our component. For that, we can use the @Inject() decorator, which simply injects the token from the currently active injectors.

// app.component.ts
import { Component, Inject } from '@angular/core';

import { Environment } from './environment.service'; 
import { ENVIRONMENT } from './injection.tokens';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html'
})
export class AppComponent  {
  constructor(@Inject(ENVIRONMENT) private environment2: Environment) {}
}

Additionally, you can also provide the Injection Token in @NgModule and get rid of the providedIn and the factory function while creating a new InjectionToken, if that suits you.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Environment2 } from './environment.service'; 
import { ENVIRONMENT } from './injection.tokens';

@NgModule({
    imports:      [ BrowserModule ],
    declarations: [ AppComponent ],
    bootstrap:    [ AppComponent ],
    providers: [{
        provide: ENVIRONMENT,
        useValue: Environment2
    }]
})
export class AppModule { }
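
This is also where the token pays off in tests. Below is a minimal sketch (assuming the usual Angular TestBed/Jasmine setup) of swapping the real environment for a mock:

// app.component.spec.ts
import { TestBed } from '@angular/core/testing';

import { AppComponent } from './app.component';
import { ENVIRONMENT } from './injection.tokens';

describe('AppComponent', () => {
  it('should run against a mocked environment', () => {
    TestBed.configureTestingModule({
      declarations: [AppComponent],
      providers: [
        // overrides the root-provided factory (or the useValue above)
        { provide: ENVIRONMENT, useValue: { production: true } }
      ]
    });
    const fixture = TestBed.createComponent(AppComponent);
    expect(fixture.componentInstance).toBeTruthy();
  });
});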

Demo

https://stackblitz.com/edit/angular-2fqggh

That’s it for all. Keep Rocking \m/ \0/

If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.

Enable tree-shaking in Rails/Webpacker: A Sequel


A month ago, I wrote a blog post explaining a hacky way to enable tree-shaking in a Rails/Webpacker project at Simpl. I would definitely recommend skimming through the previous post if you have not already.

In this post, we will jump straight into a more robust and stable solution. But before that, I want to resurrect the old memories that haunted me for months, wherein a broken manifest.json was generated at a random point during webpack compilation. This time, after upgrading @rails/webpacker and the related webpack plugins, the problem escalated beyond repair: an incomplete yet valid manifest.json was generated randomly, having fewer pack entries than expected. So even the generated manifest.json had little chance of being rescued by the hacky Node.js fix_manifest.js script I had written last time to fix the broken JSON.

After a bit of googling my way out, I learned that webpack, with multi-compiler configurations, compiles each webpack configuration asynchronously and in no particular order, which is why I was getting an invalid manifest.json earlier.

Imagine two webpack compilations running simultaneously and writing to the same manifest.json at the same time:

{
  "b.js": "/packs/b-b8a5b1d3c0c842052d48.js",
  "b.js.map": "/packs/b-b8a5b1d3c0c842052d48.js.map"
}  "a.js": "/packs/a-a3ea1bc1eb2b3544520a.js",
  "a.js.map": "/packs/a-a3ea1bc1eb2b3544520a.js.map"
}

Using a different manifest file for each pack

Yes, this is the robust and stable solution I came up with. First, you have to override the Manifest fileName in every webpack configuration in order to generate a separate manifest file for each pack, such as manifest-0.json, manifest-1.json, and so on. Then, use the same Node.js script fix_manifest.js, with a slight modification, to concatenate all the generated files into a final manifest.json that is both accurate (having all the desired entries) and valid (JSON).

For that, we have to modify the existing generateMultiWebpackConfig method (in ./config/webpack/environment.js) to remove the existing clutter of disabling/enabling the writeToEmit flag in Manifest, which we no longer need. Instead, we will create a deep copy of the original webpack configuration and override the Manifest plugin opts for each entry. The deep copy is mandatory so that a unique Manifest fileName is retained for each pack file.

const { environment } = require('@rails/webpacker')
const cloneDeep = require('lodash.clonedeep')

environment.generateMultiWebpackConfig = function(env) {
  let webpackConfig = env.toWebpackConfig()
  // extract entries to map later in order to generate separate 
  // webpack configuration for each entry.
  // P.S. extremely important step for tree-shaking
  let entries = Object.keys(webpackConfig.entry)

  // Finally, map over extracted entries to generate a deep copy of
  // Webpack configuration for each entry to override Manifest fileName
  return entries.map((entryName, i) => {
    let deepClonedConfig = cloneDeep(webpackConfig)
    deepClonedConfig.plugins.forEach((plugin, j) => {
      // A check for Manifest Plugin
      if (plugin.opts && plugin.opts.fileName) {
        deepClonedConfig.plugins[j].opts.fileName = `manifest-${i}.json`
      }
    })
    return Object.assign(
      {},
      deepClonedConfig,
      { entry: { [entryName] : webpackConfig.entry[entryName] } }
    )
  })
}

Finally, we will update the ./config/webpack/fix_manifest.js NodeJS script to concatenate all the generated Manifest files into a single manifest.json file.

const fs = require('fs')

let manifestJSON = {}
fs.readdirSync('./public/packs/')
  .filter((fileName) => fileName.indexOf('manifest-') === 0)
  .forEach(fileName => {
    manifestJSON = Object.assign(
      manifestJSON,
      JSON.parse(fs.readFileSync(`./public/packs/${fileName}`, 'utf8'))
    )
})

fs.writeFileSync('./public/packs/manifest.json', JSON.stringify(manifestJSON))

Wrap up

Please note that the compilation of a huge number of JS/TS entries takes a lot of time and CPU, hence it is recommended to use this approach only in a production environment. Additionally, set max_old_space_size as per your need to handle the out-of-memory issue during production compilation; we are using 8000 MB, i.e. 8 GB, here.

$ node --max_old_space_size=8000 node_modules/.bin/webpack --config config/webpack/production.js
$ node config/webpack/fix_manifest.js

Always run those commands one after the other to generate a fit and fine manifest.json 😙

If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.

RTFC


I really could not think of a better title for this post because it is not just about using an @Input property setter instead of the life-cycle hook ngAfterViewInit. Hence the title is pretty much inspired by RTFM, with "Manual" replaced by "Code".

It’s about how important it is to read the code.

Just read the code..!

Last month I published an Angular blog post on the NgConf Medium publication in which I proposed various ways to use jQuery plugins in Angular. If you have not read it yet, do read it here and comment if you have any feedback. Unfortunately, I did not get lucky enough to become an ngChampion (kudos to those who did), hence I have decided to publish the sequel here on my personal blog.

So after publishing the first post, I went on reading the source code of the Material Badge component, just casually.

And to my surprise, I noticed 3 profound things:

Structural Directive over Component

It depends on the functionality you want to build into the component. If all you want to do is alter a single DOM element, then always go for a custom structural directive instead of writing a custom component, because a custom component mostly introduces its own APIs unnecessarily.

For example, take a look at the app-toolbar-legends component from the last article. Remember, I'm not contradicting myself in this article; however, for this particular jQuery plugin in Angular, we could safely create an Angular directive rather than an Angular component with its own API in the form of the class and icon attributes below.

<app-toolbar-legends class="btn-toolbar-success" icon="fa-bitcoin" [toolbarConfig]="{position: 'right'}">
  <div class="toolbar-icons hidden">
    <a href="#"><i class="fa fa-bitcoin"></i></a>
    <a href="#"><i class="fa fa-eur"></i></a>
    <a href="#"><i class="fa fa-cny"></i></a>
  </div>
</app-toolbar-legends>
<app-toolbar-legends class="btn-toolbar-dark" icon="fa-apple" [toolbarConfig]="{position: 'right', style: 'primary', animation: 'flip'}">
  <div class="toolbar-icons hidden">
    <a href="#"><i class="fa fa-android"></i></a>
    <a href="#"><i class="fa fa-apple"></i></a>
    <a href="#"><i class="fa fa-twitter"></i></a>
  </div>
</app-toolbar-legends>

That means we can simplify the usage of the jQuery plugin in Angular by slapping the directive onto the existing markup as follows. There is no need to figure out where the class or icon values end up in the component template; it is pretty clear and concise here. Easy: just slap the appToolbarLegends directive on, along with the jQuery plugin configurations.

<div class="btn-toolbar btn-toolbar-success" [appToolbarLegends]="{position: 'right'}" >
  <i class="fa fa-bitcoin"></i>
</div>
<div class="toolbar-icons hidden">
  <a href="#"><i class="fa fa-bitcoin"></i></a>
  <a href="#"><i class="fa fa-eur"></i></a>
  <a href="#"><i class="fa fa-cny"></i></a>
</div>


<div class="btn-toolbar btn-toolbar-dark" [appToolbarLegends]="{position: 'right', style: 'primary', animation: 'flip'}">
  <i class="fa fa-apple"></i>
</div>
<div class="toolbar-icons hidden">
  <a href="#"><i class="fa fa-android"></i></a>
  <a href="#"><i class="fa fa-apple"></i></a>
  <a href="#"><i class="fa fa-twitter"></i></a>
</div>

Generate Unique Id for the DOM

I wanted a unique id attribute for each instance of the toolbar in order to map them to their respective toolbar buttons. I’m still laughing at myself for going above and beyond just to generate a unique ID with 0 dependencies. Finally, StackOverflow came to the rescue 😅

Math.random().toString(36).substr(2, 9)

But while reading the source code of the Material Badge component, I found an elegant approach that I wish to frame on the wall someday 😂. This will generate a unique _contentId for each instance of the directive without much fuss.

import { Directive } from '@angular/core';
let nextId = 0;
@Directive({
  selector: '[appToolbarLegends]'
})
export class LegendsDirective {
  private _contentId: string = `toolbar_${nextId++}`;
}

@Input property setter vs ngAfterViewInit

Before we get into the getter/setter, let's understand when and why to use ngAfterViewInit. It is fairly easy to understand: it is a life-cycle hook that triggers once the view of the component (or of the element the directive is attached to) is initialized, after all of its bindings are evaluated. That means, if you are not concerned with querying the DOM or DOM attributes that have interpolation bindings on them, you can simply use a class setter method as a substitute.

import { Directive, Input } from '@angular/core';
let nextId = 0;
@Directive({
  selector: '[appToolbarLegends]'
})
export class LegendsDirective {
  private _contentId: string = `toolbar_${nextId++}`;
  @Input('appToolbarLegends')
  set config(toolbarConfig: object) {
    console.log(toolbarConfig); // logs {position: "right"} object
  }
}

The class setters are called way before ngAfterViewInit or ngOnInit and hence they speed up the directive instantiation slightly. Also, unlike ngAfterViewInit or ngOnInit, the class setters are called every time a new value is about to be set, giving us the benefit of destroying/recreating the plugin with new configurations.
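
Here is a rough sketch of that destroy/recreate cycle inside the setter. The toolbar() and toolbar('destroy') calls are placeholders for whatever initialise/teardown API your jQuery plugin actually exposes:

import { Directive, ElementRef, Input, OnDestroy } from '@angular/core';

declare const jQuery: any; // provided globally by the jQuery script tag

@Directive({
  selector: '[appToolbarLegends]'
})
export class LegendsDirective implements OnDestroy {
  private initialized = false;

  constructor(private el: ElementRef) {}

  @Input('appToolbarLegends')
  set config(toolbarConfig: object) {
    // tear down the previous instance before applying the new configuration
    if (this.initialized) {
      jQuery(this.el.nativeElement).toolbar('destroy'); // placeholder teardown call
    }
    jQuery(this.el.nativeElement).toolbar(toolbarConfig); // placeholder init call
    this.initialized = true;
  }

  ngOnDestroy() {
    if (this.initialized) {
      jQuery(this.el.nativeElement).toolbar('destroy'); // placeholder teardown call
    }
  }
}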

Demo Day

Thanks for coming this far. So the moral of the story is to read code written by others, no matter which open-source project it is.

Just read the code..!

https://stackblitz.com/edit/angular-zgi4er?embed=1&file=src/app/app.component.ts
If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.

TypeScript: Generocks


It's been a while since this blog post about TypeScript Generics went into my drafts (not sure why), and it is finally out. 🤞 But before we get into Generics, we should be aware of what makes types in TypeScript exciting.

TypeScript allows us to type-check the code so that errors can be caught at compile time instead of run time. For example, the following JavaScript code may look fine while writing it but will throw an error in a browser or Node.js environment.

const life = 42;
life = 24;  // no complaint while writing it, but a TypeError at run time

In this case, TypeScript infers the type of the variable life based on its value 42 and reports the error right in the editor/terminal. Additionally, you can specify the type explicitly if needed:

const life: number = 42;
life = 24; // Throws an error at compile time

Named Types

So there are a few built-in types in TypeScript such as number, string, object, array, boolean, any, and void (apart from undefined, null, never, and, more recently, unknown). However, these are not enough, especially when we use them together in a large project and need a sort of umbrella type, a custom type that holds them together and is reusable. Such types are called named types, and we can create them as classes, interfaces, enums, and type aliases.

For example, we can create a custom type, MyType comprising a few primitive types as follows.

interface MyType {
  foo: string;
  bar: number;
  baz: boolean;
}

But what if we want foo to be either a string, an object, or an array!? One way is to copy the existing interface into MyType2 (and so on).

interface MyType2 {
  foo: Array<string>;
  bar: number;
  baz: boolean;
}

Just as we pass arbitrary values to a function through its parameters to make the function reusable, what if we were allowed to do the same for MyType as well? With that approach, duplicating code would not be needed while handling the same shape of data with different types. But before we dive into it, let us first understand the problem with more clarity. To do that, we can write a cache function to cache some random string values.

function cache(key: string, value: string): string {
  (<any>window).cacheList = (<any>window).cacheList || {};
  (<any>window).cacheList[key] = value;
  return value;
}

Because of the strict type checking, we are forcing the parameter value to be of type string. But what if someone wants to use a numeric value? I'll give you a hint: which operator in JavaScript do we use for a fallback value when the expected value of a variable is falsy? Correct, we use ||, a.k.a. the logical OR operator. Now imagine for a second that you are the creator of the TypeScript language (sorry, Anders Hejlsberg) and willing to resolve this issue for all developers. You might go for a similar solution to have a fallback type and, after countless hours of brainstorming, end up using a bitwise-looking operator, i.e. | in this case (FYI, that thing is called union types in alternate dimensions where Anders Hejlsberg is still the creator of TypeScript).

function cache(key: string, value: string | number): string | number {
  (<any>window).cacheList = (<any>window).cacheList || {};
  (<any>window).cacheList[key] = value;
  return value;
}

Is not that amazing? Wait, but what if someone wants to cache boolean values or arrays/objects of custom types!? Since the list is never-ending, it looks like our current solution is not scalable at all. Would it not be great to control these types from outside? I mean, how about we define placeholder types inside the above implementation and provide the real types from the call site instead?

Generic Function

Let us use ValueType (or use any other placeholder or simply T to suit your needs) as a placeholder wherever needed.

function cache<ValueType>(key: string, value: ValueType): ValueType {
  (<any>window).cacheList = (<any>window).cacheList || {};
  (<any>window).cacheList[key] = value;
  return value;
}

We can even pass the custom type parameter MyType to the cache method in order to type check the value for correctness (try changing bar‘s value to be non-numeric and see for yourself).

cache<MyType>("bar", { foo: "foo", bar: 42, baz: true });

This mechanism of parameterizing types is called Generics. This, in fact, is a generic function.

Generic Classes

Similar to the generic function, we can also create generic classes using the same syntax. Here we have created a wrapper class CacheManager to hold the previously defined cache method and the global (<any>window).cacheList variable as a private property cacheList.

class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};
  cache(key: string, value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}
new CacheManager<MyType>().cache("bar", { foo: "bar", bar: 42, baz: true });

Even though the above code is perfect for encouraging reusability of CacheManager while caching all types of values, someday in the future there will be a need to provide varying types of data in MyType's properties as well. That brings us back to the original problem of MyType vs MyType2 (from the Named Types section above). To prevent us from duplicating the custom type MyType to accommodate varying property types, TypeScript allows using generic types with interfaces too, which makes them generic interfaces. In fact, we are not restricted to one type parameter below; use as many as needed. Additionally, we can have union types as a fallback for the provided type parameters. This permits us to pass an empty {} object while using the generic interface wherever we desire the default types of values.

interface MyType<FooType, BarType, BazType> {
  foo: FooType | string;
  bar: BarType | number;
  baz: BazType | boolean;
}
new CacheManager<MyType<{},{},{}>>().cache("bar", { foo: "bar", bar: 42, baz: true });
new CacheManager<MyType<number, string, Array<number>>>().cache("bar", { foo: 42, bar: "bar", baz: [0, 1] })

I know that it looks a bit odd to pass {} wherever the default parameter types are intended. However, this has been resolved in TypeScript 2.3, which allows us to provide default type arguments so that passing {} as a type parameter becomes optional.
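
As a quick sketch of that TypeScript 2.3+ feature, the defaults can live on the interface itself (here MyType is redefined with default type arguments), so the plain MyType usage no longer needs the {} noise:

interface MyType<FooType = string, BarType = number, BazType = boolean> {
  foo: FooType;
  bar: BarType;
  baz: BazType;
}

// the defaults kick in, no {} required
new CacheManager<MyType>().cache("bar", { foo: "bar", bar: 42, baz: true });
// and explicit type arguments still work as before
new CacheManager<MyType<number, string, Array<number>>>().cache("bar", { foo: 42, bar: "bar", baz: [0, 1] });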

Generic Constraints

When we implicitly or explicitly use a type for anything, such as the key parameter of the cache method above, we are purposefully constraining the type of key to be string. But we are inadvertently allowing any sort of string here. What if we want to constrain the type of key to be one of the interface properties of MyType only!? One naive way to do so is to use string literal types for key below, and in return get an [ts] Argument of type '"qux"' is not assignable to parameter of type '"foo" | "bar" | "baz"' error when the key qux does not belong to MyType. This works correctly; however, because of the hard-coded string literal types, we cannot replace MyType with the YourType interface since it has different properties associated with it, as follows.

interface YourType {
  qux: boolean;
}
class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};
  cache(key: "foo" | "bar" | "baz", value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}
new CacheManager<MyType<{},{},{}>>().cache("foo", { foo: "bar", bar: 42, baz: true }); // works
new CacheManager<MyType<{},{},{}>>().cache("qux", { foo: "bar", bar: 42, baz: true }); // !works
new CacheManager<YourType>().cache("qux", { qux: true }); // !works but should have worked

To make it work, we would have to manually update the string literal types used for key in the class implementation every time. However, similar to classes, TypeScript also allows us to extend generic types, so we can replace the string literal types "foo" | "bar" | "baz" of key with a generic constraint, i.e. KeyType extends keyof ValueType, as follows. Here, we are forcing the type-checker to validate that the type of key is one of the properties of the interface provided to CacheManager. This way we can even serve the previously mentioned YourType without any error.

class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};
  cache<KeyType extends keyof ValueType>(key: KeyType, value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}
new CacheManager<MyType<{},{},{}>>().cache("foo", { foo: "bar", bar: 42, baz: true }); // works
new CacheManager<MyType<{},{},{}>>().cache("qux", { foo: "bar", bar: 42, baz: true }); // !works
new CacheManager<YourType>().cache("qux", { qux: true }); // works

Alright, that brings us to the climax of Generics. I hope \0/ Generocks  \m/ for you.

If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.

Designing Multi-Tenant applications


In the enterprise world, it is often mandatory for web applications to support multi-tenancy. Multi-tenancy is a software architecture and approach where a single instance of a software application serves multiple customers or tenants. Each tenant is a separate entity with its own data, configurations, and user management, but they all share the same underlying software infrastructure. In other words, the software is designed to provide a single application instance that can be customised and configured for different organisations or users.

The multi-tenant application must allow varying static content such as the customer's logo/backgrounds, branding, CMS content, and 3rd-party analytics SDKs such as Google Analytics, Freshdesk, Hotjar, etc. For that, it is often recommended practice to keep all the static assets for each customer in a dedicated folder and then have separate project configurations in Angular CLI, as sketched below. This way you can build the application for each tenant, one at a time, using the matching configuration.
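
A hypothetical excerpt of what such per-tenant configurations tend to look like in angular.json (the tenant names and paths are purely illustrative), each one built with ng build --configuration=<tenant>:

// angular.json (hypothetical excerpt)
"configurations": {
  "client1": {
    "assets": [{ "glob": "**/*", "input": "src/assets/client1", "output": "/assets" }],
    "styles": ["src/styles/client1.scss"]
  },
  "client2": {
    "assets": [{ "glob": "**/*", "input": "src/assets/client2", "output": "/assets" }],
    "styles": ["src/styles/client2.scss"]
  }
}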

However, there are several drawbacks to the aforementioned approach:

  1. You are rebuilding the same application multiple times just because of the different static assets required by each customer.
  2. You are exponentially increasing the time taken by the CI/CD pipeline to build the same application for all the customers over and over, especially as the customer base grows.
  3. You can run the application for only one customer at a time, making it very time-consuming to switch between them locally.

To tackle these problems, I'm proposing a new approach to multi-tenancy.

Multi-tenancy the modern way

In the new approach, you build the application only once regardless of the number of customers, keeping the CI/CD pipeline at O(1). You can even access the same application for multiple customers locally at the same time, boosting developer productivity. Let's see how.

The major loophole in the existing approach is the static assets. Once you move the static assets out of the build and load them at run time, you have won half the battle. But how do we load the correct static assets needed for a particular customer 🤔? The answer lies in the application URL.

Map static assets to Application URL

In order to map the application URL to the respective static assets, we can create one folder per customer under the /assets directory. Each of these directories will have the customer's logo, styles, profile.json, and 3rd-party SDKs such as Freshdesk. The profile.json file may contain the customer's name and other metadata related to authentication or feature flags, and that's the first thing the application will fetch.

Here, when you access https://client1.myapp.com in the browser, we can read the hostname via window.location.hostname to find the appropriate static-assets folder and query its profile.json file to apply customer-specific feature flags and branding at run time.
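
A minimal sketch of that bootstrapping step (the folder layout and profile fields are illustrative):

// resolve the tenant folder from the hostname and fetch its profile first
const tenant = window.location.hostname; // e.g. 'client1.myapp.com'

fetch(`/assets/${tenant}/profile.json`)
  .then((res) => res.json())
  .then((profile) => {
    // apply tenant-specific feature flags and branding before the app renders
    console.log('Booting for tenant:', profile.name);
  });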

Override Application hostname

This is a crucial bit: it avoids maintaining real static assets under the aforementioned localhost directory and instead points the application elsewhere when running locally. For that, we create localhost/override-hostname.js to override the application hostname and include it via a script tag in index.html.

window.overrideHostname = 'client1.myapp.com';

This allows you to switch between different customers without restarting the application. For example, changing client1.myapp.com to client2.myapp.com makes the application load client2's static assets at run time.

Emit styles for all customers

In the Angular/Nx CLI, you can compile multiple Sass files without injecting them into index.html.
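
One way to do that (a sketch with illustrative file names) is the styles option in angular.json: inject: false keeps a bundle out of index.html, while bundleName controls the emitted file name so it matches the hostname used below.

// angular.json (hypothetical excerpt)
"styles": [
  "src/styles.scss",
  { "input": "src/styles/client1.scss", "bundleName": "client1.myapp.com", "inject": false },
  { "input": "src/styles/client2.scss", "bundleName": "client2.myapp.com", "inject": false }
]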

Then dynamically inject the appropriate styles at bootstrapping as follows:

bootstrapApplication(AppComponent, appConfig).then(() => {
  const style = document.createElement('link');
  style.rel = 'stylesheet';
  style.href = `${window.location.hostname === 'localhost'
    ? (<any>window).overrideHostname || window.location.hostname
    : window.location.hostname}.css`;
  document.head.appendChild(style);
});

Swap profiles using Requestly

Requestly is a browser extension that allows us to intercept and modify HTTP requests, and it is going to play a major role in resolving the 3rd drawback we listed earlier (being able to run the application for only one customer at a time). To resolve it, you must define 3 rules in Requestly:

1. Cancel http://localhost:5200/assets/localhost/override-hostname.js

2. Insert overrideHostname script

3. Change http://localhost:5200/assets/localhost/profile.json’s response.

Now, without the above rules, the application renders for client1, loading its logo and applying its branding. But with the rules in place, it renders for client2 correctly.

Wrap up

I hope you will find it useful. You can find the working code examples at https://github.com/codef0rmerz/multi-tenant-application

Fantastic Beasts: The Crimes of Pythonworld


When I decided to make my foray into the Pythonic world, I stumbled upon the sorcery between the system-level Python 2.7 and Python 3. What are pip and pip3? I used to install some Python packages using pip and, if that did not work, used pip3. One of the two always worked 🙅‍♂️. I did not know what I was doing apart from just getting the system up and running, until I was determined to see how deep the rabbit hole goes. But the rabbit hole was not that deep; it was my confused mind that made it deep until now…

pip vs pip3

As you may have already guessed, Python 3 is the successor of Python 2. In order to maintain backward compatibility of the package manager pip for Python 2, Python 3 came up with its own package manager under the name pip3. However, we can point the python and pip commands directly to the python3 and pip3 executables respectively (which we will see in later sections), so that we do not have to deal with the python3 or pip3 commands while running a Python script or installing a Python package.

The upshot is that pip by default points to the system level Python 2.7 and pip3 points to whatever version we have for Python3.

⇒  pip --version
pip 19.0 from (python 2.7)
⇒  pip3 --version
pip 18.1 from (python 3.7)

To naively create aliases for the python and pip commands to point to python3 and pip3, we can add the following to our bash/zsh config file and reload the shell for it to take effect.

alias python=python3
alias pip=pip3
# BEFORE
⇒  python --version
Python 2.7.15
⇒  pip --version
pip 19.0 from (python 2.7)

# AFTER
source ~/.zshrc
⇒  python --version
Python 3.7.1
⇒  pip --version
pip 18.1 from (python 3.7)

This approach works; however, we constantly have to edit the bash/zsh file to switch between two or more versions of Python. Clearly, we can do better.

Introducing pyenv

Pyenv allows us to install and switch between multiple versions of Python.

pyenv versions

We can check which versions of Python are installed on our system with the following command. The * at the beginning marks the current Python version (the system Python 2.7 in this case) that the python and pip commands point to.

⇒  pyenv versions
* system
  3.6.3

pyenv install <version>

We can install any version of Python using the following install command. Note that installing a version does not implicitly switch to it.

⇒  pyenv install 3.7.2
⇒  pyenv versions
* system
  3.6.3
  3.7.2

pyenv shell <version>

To manually switch to any Python version (only in the current shell), we can use this particular command. That means, killing the shell window would restore the Python version to the system level one. Here we have switched to Python 3.7.2 in the current shell.

⇒  pyenv shell 3.7.2
⇒  pyenv versions
  system
  3.6.3
* 3.7.2

Introducing pyenv-virtualenv

Now we have fixed the problem of maintaining different versions of Python across various Python projects. But a different yet somewhat similar problem persists for Python packages too.

For example, imagine we have two Python projects running on top of Python 3.7.2 but using different versions of Django, 2.1.5 (latest) and 1.9. Installing both one after the other using pip install Django==2.1.5 and pip install Django==1.9 would override the 2.1.5 version with 1.9. Hence, both projects would inadvertently end up using the same Django version, which we do not want. That's where Python virtual environments help.

There are many Python packages out there to manage virtual environments, some of them being virtualenv, virtualenvwrapper, etc., and each is better or worse than the others in some way. However, we are going to use pyenv-virtualenv, which is a pyenv plugin that uses virtualenv under the hood.

pyenv virtualenvs

Similar to pyenv versions, this command shows us a list of virtual environments we have on our system. Below I have one virtualenv venv already created for Python 3.6.3.

⇒  pyenv virtualenvs
  3.6.3/envs/venv
  venv

pyenv virtualenv <environment-name>

Let’s create a virtual environment for Python 3.7.2. Now we can see the two virtual environments created but none of them are activated yet.

⇒  pyenv virtualenv venv-3.7.2
⇒  pyenv virtualenvs
  3.6.3/envs/venv
  3.7.2/envs/venv-3.7.2
  venv 
  venv-3.7.2 

pyenv activate <environment-name>

Let’s activate the virtual environment venv-3.7.2. The * in the beginning represents the activated virtual environment where Django will be installed.

⇒  pyenv activate venv-3.7.2
⇒  pyenv virtualenvs
  3.6.3/envs/venv 
  3.7.2/envs/venv-3.7.2 
  venv 
* venv-3.7.2 

First, we can confirm if Django is installed in the activated virtual environment. If not, we will install Django 1.9.

# BEFORE
⇒  pip list --format=columns
Package    Version
---------- -------
pip        19.0.1
setuptools 28.8.0

# AFTER
⇒  pip install Django==1.9
⇒  pip list --format=columns
Package    Version
---------- -------
Django     1.9
pip        19.0.1
setuptools 28.8.0

So far so good. Now we must verify whether we got the isolation for packages using pyenv-virtualenv that we wanted.

pyenv deactivate

To check that, we can deactivate the current virtual environment. This command will restore Python to the system-level one, and pip list will now show all the global Python packages installed on our system. Notice that Django is not installed anymore since we are out of the venv-3.7.2 virtual environment.

⇒  pyenv deactivate
⇒  pyenv virtualenvs
  3.6.3/envs/venv 
  3.7.2/envs/venv-3.7.2 
  venv 
  venv-3.7.2 
⇒  pip list --format=columns
Package    Version
---------- -------
pip        9.0.1
setuptools 28.8.0
airbrake               2.1.0
aniso8601              4.1.0
arrow                  0.10.0
asn1crypto             0.24.0
attrs                  18.2.0
bcrypt                 3.1.6
bitarray               0.8.3
boto                   2.49.0
boto3                  1.9.83
.
.
.

Wrap up

As of now, pyenv and pyenv-virtualenv are serving me well. I hope that things will be stable going forward too. 🤟

If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.

Managing CRON with Ansible


Setting up a CRON job manually is child's play, so why am I writing about it? For two main reasons:

  1. My experience of setting it up with Ansible
  2. Common mistakes I made which others can avoid

CRON

The CRON daemon is a long-running process that executes commands at specific dates and times, which makes it easy to schedule activities like sending bulk emails. CRON relies on a crontab file that holds the list of scheduled commands to run. We can manually add/edit/remove scheduled commands directly in the crontab file, but this may introduce bugs, especially when the list grows long. Ansible helps deploy such CRON jobs effortlessly.

Ansible

Ansible is an IT automation and orchestration engine. It uses the YAML file syntax for us to write such automation, called Plays, and the file itself is referred to as a Playbook. Playbooks contain Plays, Plays contain Tasks, and Tasks run pre-installed modules sequentially and trigger optional handlers.

Similarly, I have used a CRON module in Ansible to set up a task which configures a CRON job as follows:

- name: Run CRON job every day at 11.05 PM UTC
  become: yes
  become_method: sudo
  cron:
    name: "apache_restart"
    user: "root"
    hour: 23
    minute: 5
    job: "/bin/sh /usr/sbin/apache_restart.sh"
    state: present
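
For reference, once deployed, this task boils down to an entry in root's crontab roughly like the following (the Ansible cron module marks the line with a comment carrying the job name):

#Ansible: apache_restart
5 23 * * * /bin/sh /usr/sbin/apache_restart.sh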

Imagine there is a fictional CRON job to restart Apache2 at the specified time every day (god knows why). But I made my fair share of mistakes initially while setting it up. Let us go step by step through each of those mistakes:

become and become_method

These flags are only necessary while running the job with sudo or any other privilege escalation method. In this case, I wanted to run the /bin/sh /usr/sbin/apache_restart.sh command with sudo, and I did not want the password prompt that we usually get while running such commands manually. The become flag prevents the password prompt.

In the beginning, I had forgotten to add these flags, preventing the CRON job from executing the apache_restart.sh bash file as expected.

cron module

Ansible lets us use the pre-installed CRON module so that it is far easier to set up CRON jobs. Although, by mistake, I had made the CRON module an Ansible Play instead of a Task, as mentioned below.

- cron:
    name: "apache_restart"
    user: "root"
    hour: 23
    minute: 5
    job: "/bin/sh /usr/sbin/apache_restart.sh"
    state: present

As we learned before, only Tasks can run pre-installed modules. So Ansible instantly threw an error while deploying and I managed to save my face 🤦‍♂️

cron name

I thought that since I had already named the task, naming the CRON module would not be necessary. But I was more embarrassed than wrong, because each time you deploy any changes with Ansible, without a CRON name it will set up a new CRON job, leaving the previous one as is. So I was literally restarting Apache2 thrice at a time. Remember, the CRON name works as a unique key to identify whether a CRON job is already set up with the same name. If not, it sets up a brand new CRON job in the crontab file; otherwise, it overrides the existing one with the new configuration.

state

The default state of a CRON job is present. To disable a particular CRON job, you change the state to absent and redeploy it via Ansible. I was using the state present without a CRON name, which was creating multiple crontab entries on each deployment.
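
For instance, a minimal sketch of disabling the job above later on (same name, state flipped):

- name: Disable the apache_restart CRON job
  become: yes
  become_method: sudo
  cron:
    name: "apache_restart"
    user: "root"
    state: absent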

job

The job key takes the actual command that you want to run at the specified time/date. But make sure to use absolute paths in the command.

Wrap up

I also use the tail -f /var/log/syslog and grep CRON /var/log/syslog commands to check the logs and make sure that CRON actually runs the bash file I specified.

If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation for your donation.