Taswar Bhatti
The synonyms of software simplicity
Azure Functions

I was playing around with Azure Functions and thought I would write a quick blog post on how to use them to read a text file and send me an email daily. Basically, I went with a joke-a-day idea. So let's get started with writing some of the awesome code 🙂

Prerequisites
If you don’t have an Azure subscription, you can always create a free account before you begin.
Azure function
Log into your Azure account and click the New button in the upper left-hand corner of the Azure portal, then select Compute > Function App. You should see a Create button for the Function App.

Function App

I have named my app zeytinmail since it was available, and I also created a new Resource Group; you can use an existing one.
The hosting plan is set to the Consumption plan, and since I am on the East Coast I have set the location to East US.
For storage I am just using an auto-generated account. I could use an existing one, but for simplicity's sake I am using the default that was auto-generated.

Function App Create

Now that we have it created we should see the overview of our function app.
Azure Function Overview

We can now click on Functions and create a Timer trigger function in C#.
Timer Function

We should see the run.csx file open up with the Run method pre-populated.

We will then upload the jokes.txt file to the server. I got my jokes.txt from https://github.com/rdegges/yomomma-api/blob/master/jokes.txt. I just clicked on the Add button, created the file, and pasted the text into it.

From there, in order to read the file in the Azure Function, all I have to do is put the path of the file in my code.

File Path
You can find the path by using Kudu in the Azure Function; files are usually stored on the D drive, under D:\home\site\wwwroot\{functionName}\file

Your solution would look something like below:
Timer Code
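A minimal run.csx along those lines might look like the sketch below; the function name in the path is an assumption, so check yours in Kudu:

```csharp
using System;
using System.IO;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    // Path assumes jokes.txt was uploaded next to run.csx; verify in Kudu
    var path = @"D:\home\site\wwwroot\TimerTriggerCSharp1\jokes.txt";

    // Pick a random joke from the file
    var jokes = File.ReadAllLines(path);
    var joke = jokes[new Random().Next(jokes.Length)];

    log.Info($"Joke of the day: {joke}");
}
```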

Next up, we need to create a SendGrid account.
SendGrid has a free tier for sending email.

We need to set up our Application Settings to hold our SendGrid API key. Go into Application Settings and add a new key; let's call it SendGridKey.
sendgrid key

We then need to go into the Integrate section of the function, where we want a new Output for our function. We will select the SendGrid output to send an email out.
SendGrid Out

We can then put in our API key app setting like below, and a from address if we want.
SendGrid ApiKey

We now need to modify our function.json file to add the SendGrid information; add this inside the bindings array as another section.
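As a sketch, the binding section could look like the following; the to and from addresses are placeholders, and apiKey names the app setting rather than holding the key itself:

```json
{
  "type": "sendGrid",
  "name": "message",
  "apiKey": "SendGridKey",
  "to": "me@example.com",
  "from": "jokes@example.com",
  "subject": "Joke of the day",
  "direction": "out"
}
```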

Finally we need to write the code to send the email out.
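Putting it together, the run.csx could look roughly like this; the file path is an assumption (verify it in Kudu), and the Mail type comes from the SendGrid package that the output binding references:

```csharp
#r "SendGrid"
using System;
using System.IO;
using SendGrid.Helpers.Mail;

public static void Run(TimerInfo myTimer, TraceWriter log, out Mail message)
{
    // Path assumes jokes.txt sits next to run.csx; verify in Kudu
    var path = @"D:\home\site\wwwroot\TimerTriggerCSharp1\jokes.txt";
    var jokes = File.ReadAllLines(path);
    var joke = jokes[new Random().Next(jokes.Length)];

    // The SendGrid output binding picks up 'message' and sends the email
    message = new Mail { Subject = "Joke of the day" };
    message.AddContent(new Content { Type = "text/plain", Value = joke });

    log.Info($"Sent joke: {joke}");
}
```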

With this we will be able to send the email to a user with a joke of the day.

Last but not least, we need to set a schedule by clicking on Integrate and setting the time for the schedule to send the email out. If you want to learn more about CRON timer expressions, you can visit this site: https://codehollow.com/2017/02/azure-functions-time-trigger-cron-cheat-sheet/
Azure Timer Schedule
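For reference, Azure Functions uses a six-field CRON expression (second minute hour day month day-of-week); a schedule that fires every morning at 9:30, for example, would look like this in function.json:

```json
{
  "type": "timerTrigger",
  "name": "myTimer",
  "schedule": "0 30 9 * * *",
  "direction": "in"
}
```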


And that's how to send out an email with a joke using SendGrid and Azure Functions. Have fun 😛

ElasticSearch

For the past year I have been evaluating, working with, and even presenting on ElasticSearch, and I thought it would be good to write a series of articles on ElasticSearch for .NET developers and what it brings to the table when developing a software solution. I also did a talk on ElasticSearch at Montreal DevTeach; if you are interested in my slides, feel free to view them on SlideShare or my blog.

Without further ado, let's get started and look at what ElasticSearch really is.

First off, some consider ElasticSearch part of the ELK Stack, but under the new branding it is called the Elastic Stack instead. Although the name ELK has stuck with many people (and Google searches), from here on we will call it the Elastic Stack.

So what does the Elastic Stack consist of, you may wonder?
Basically, the Elastic Stack consists of ElasticSearch, Logstash and Kibana (plus Beats). Let's go through them individually so that we can understand what each component does and brings to a software solution.




ElasticSearch

This is the core search engine or store that you use for storing your data; it is built in Java. It stores documents in JSON format and uses Lucene to index them, and ElasticSearch provides and builds metadata on top of the index created by Lucene. (Note: Lucene is built in Java; there is also a port of Lucene to .NET called Lucene.NET.)

Some people may think that ElasticSearch is a database that we store data into, like MySQL, Postgres or MSSQL, but I would say Elastic is not really a database, since there is no db file and it does not have relationships like SQL. It is more like a NoSQL solution, but not quite like MongoDB either. The best way to describe it is to think of it as a search engine that you store documents in. I know it's confusing at first, but don't worry, it will become clear later, or once you start playing around with it.




Logstash

Logstash is another module/component/service. You can use Logstash without using ElasticSearch; its main functionality is to take some input, filter it, and output it somewhere. Again, the output does not need to be ElasticSearch, but usually it is. An example: I have IIS or Apache logs that I need to feed into Logstash, and I would like to geo-tag each IP address and store the results in ElasticSearch or some database. The main idea of Logstash is simple: INPUT -> FILTER -> OUTPUT. One more thing to note is that Logstash is built with JRuby on the JVM, and there are tons of open source plugins for Logstash that one can download, even to anonymize or encrypt the data before outputting it.
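As a sketch, the IIS/Apache example above might look something like this as a Logstash pipeline configuration (the file path and ElasticSearch host are assumptions):

```
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}

filter {
  # Parse the Apache combined log format into fields
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Geo-tag the client IP address
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```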




Kibana

Kibana is the graphical user interface for ElasticSearch; it is used for analyzing your data and creating charts from ElasticSearch data. It is quite powerful: one can slice and dice many kinds of charts using Kibana.
Kibana is built with Node.js and is a single-page application (SPA).




Beats

Beats are basically lightweight shippers of data. There are many types of beats: for example, Filebeat is used for shipping file data (e.g. apache.log) to ElasticSearch or Logstash, and Winlogbeat lets one ship Windows events to ElasticSearch or Logstash; check out the beats offered by Elastic. You can also write your own beat using the libbeat library, and, not to mention, beats are actually written in Go. If you are interested in using Go with VSCode, check out the Channel 9 video I did on Go and VSCode.

So that sums up the main components of the Elastic Stack. I will go through each component individually in upcoming blog posts, going from the install process through configuration.


I had the opportunity to speak at satazureday (Azure Saturday) here in Ottawa last week, where I went through the topic of Azure Key Vault. I also had a co-presenter to share the talk with, an upcoming public speaker, Petrica Mihai. He created most of the slides and the demo code in C# 🙂
You can view the code at https://github.com/mihaipetri/AzureKeyVaultNet

In any case if you are interested here are the slides on Azure Key Vault.

And the transcript:

  1. Azure Key Vault
    • What are we trying to solve with KeyVault?
    • Let’s step back and look at a Cloud Design Pattern
    • External Configuration Pattern
  2. External Configuration Pattern
  3. Typical Application
  4. Storing Configuration in file
  5. Multiple application
  6. External Configuration Pattern
    • Helps move configuration information out of the application deployment
    • This pattern can provide for easier management and control of configuration data
    • For sharing configuration data across applications and other application instances
  7. Problems
    • Configuration becomes part of deployment
    • Multiple applications share the same configuration
    • Hard to have access control over the configuration
  8. External Configuration Pattern
  9. When to use the pattern
    • When you have shared configuration, multiple application
    • You want to manage configuration centrally by DevOps
    • Provide audit for each configuration
  10. When not to use
    • When you only have a single application there is no need to use this pattern it will make things more complex
  11. Cloud Solution Offerings
    • Azure KeyVault (Today’sTalk)
    • Vault by Hashicorp
    • AWS KMS
    • Keywhiz
  12. What is Azure Key Vault ?
    • Safeguard cryptographic keys and secrets used by cloud applications and services
    • Use hardware security modules (HSMs)
    • Simplify and automate tasks for SSL/TLS certificates
  13. Gemalto / SafeNet – Hardware Security Module
  14. How Azure Key Vault can help you ?
    • Customers can import their own keys into Azure, and manage them
    • Keys are stored in a vault and invoked by URI when needed
    • KeyVault performs cryptographic operations on behalf of the application
    • The application does not see the customers’ keys
    • KeyVault is designed so that Microsoft does not see or extract your keys
    • Near real-time logging of key usage
  15. Bring Your Own Key (BYOK)
  16. Create a Key Vault New-AzureRmKeyVault -VaultName ‘MihaiKeyVault’ -ResourceGroupName ‘MihaiResourceGroup’ -Location ‘Canada East’
  17. Objects, identifiers, and versioning
    • Objects stored in Azure KeyVault (keys, secrets, certificates) retain versions whenever a new instance of an object is created, and each version has a unique identifier and URL
    • https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}
  18. Azure Key Vault keys
    • Cryptographic keys in Azure KeyVault are represented as JSON Web Key [JWK] objects
    • RSA: A 2048-bit RSA key. This is a “soft” key, which is processed in software by KeyVault but is stored encrypted at rest using a system key that is in an HSM
    • RSA-HSM: An RSA key that is processed in an HSM
    • https://myvault.vault.azure.net/keys/mykey/abcdea84815e4ca8bc19cf8eb943ee88
  19. Create a Key Vault key $key = Add-AzureKeyVaultKey -VaultName ‘MihaiKeyVault’ -Name ‘MihaiFirstKey’ -Destination ‘Software’
  20. Azure Key Vault secrets
    • Secrets are octet sequences with a maximum size of 25k bytes each
    • The Azure KeyVault service does not provide any semantics for secrets; it accepts the data, encrypts and stores it, returning a secret identifier, “id”, that may be used to retrieve the secret
    • https://myvault.vault.azure.net/secrets/mysecret/abcdea54614e4ca7ge14cf2eb943ab23
    • Create a Key Vault secret $secret = Set-AzureKeyVaultSecret -VaultName ‘MihaiKeyVault’ -Name ‘SQLPassword’ -SecretValue $secretvalue
    • Azure Key Vault certificates
      • Import/generate existing certificates, self-signed or Enroll from Public Certificate Authority (DigiCert, GlobalSign and WoSign)
      • When a KeyVault certificate is created, an addressable key and secret are also created with the same name
      • https://myvault.vault.azure.net/certificates/mycertificate/abcdea84815e4ca8bc19cf8eb943bb45
    • Create a Key Vault certificate
    • Secure your Key Vault
      • Access to a key vault is controlled through two separate interfaces: management plane and data plane
      • Authentication establishes the identity of the caller
      • Authorization determines what operations the caller is allowed to perform
      • For authentication both management plane and data plane use Azure Active Directory
      • For authorization, management plane uses role-based access control (RBAC) while data plane uses key vault access policy
    • Access Control
      • Access Control based on Azure AD
      • Access assigned at the Vault level
      • – permissions to keys
      • – permissions to secrets
      • Authentication against Azure AD
      • – application ID and key
      • – application ID and certificate
    • Azure Managed Service Identity (MSI)
      • Manage the credentials that need to be in your code for authenticating to cloud services
      • Azure KeyVault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them
      • Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD)
      • You can use this identity to authenticate to any service that supports Azure AD authentication, including KeyVault, without having any credentials in your code

      Azure Key Vault Logging

      • Monitor how and when your key vaults are accessed, and by whom
      • Save information in an Azure storage account that you provide
      • Use standard Azure access control methods to secure your logs by restricting who can access them
      • Delete logs that you no longer want to keep in your storage account
    • Azure Key Vault Pricing
      • Operations (Standard or Premium) $0.030 per 10000 operations
      • Advanced Operations (Standard or Premium) $0.150 per 10000 operations
      • Certificate Renewals (Standard or Premium) $3.00 per renewal
      • Hardware Security Module Protected Keys (Premium only) $1.00 per key
    • Azure Key Vault DEMO
      • Create KeyVault, Secrets, Keys and Certificates
      • Create AzureAD Application
      • Consuming Secrets and Keys https://azurekeyvaultnet.azurewebsites.net – live demo
      • https://github.com/mihaipetri/AzureKeyVaultNet – demo code
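To give a flavour of consuming a secret from code with MSI, here is a rough sketch using the Key Vault client library; the vault URL and secret name echo the slides, and the exact packages (Microsoft.Azure.KeyVault, Microsoft.Azure.Services.AppAuthentication) are assumptions that may differ with your versions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public class KeyVaultDemo
{
    public static async Task Main()
    {
        // MSI hands us a token for Key Vault without any credentials in code
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVault = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Secrets are addressed by URI, as described on the slides
        var secret = await keyVault.GetSecretAsync(
            "https://mihaikeyvault.vault.azure.net/secrets/SQLPassword");

        Console.WriteLine(secret.Value);
    }
}
```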
Taswar Bhatti - Cloud Design Patterns

This week I gave a talk on Cloud Design Patterns at the Ottawa .NET Community. I wanted to share the slides here, and will most likely write articles on the topic using real-world examples, with samples in C# and Node.js on AWS and Azure.

For the talk I went through what Cloud Design Patterns are, and mainly focused on the patterns below without using any platform specifics (i.e. cloud agnostic).

  • The External Configuration Pattern
  • The Cache Aside Pattern
  • The Federated Identity Pattern
  • The Valet Key Pattern
  • The Gatekeeper Pattern
  • The Circuit Breaker Pattern

For each of them I also went through when you should use the pattern and when not to use it, and I provided cloud solution offerings that one can use to implement the pattern.

Enjoy the slides 🙂

The Transcript:

  1. Agenda
    • What are Patterns?
    • The External Configuration Pattern
    • The Cache Aside Pattern
    • The Federated Identity Pattern
    • The Valet Key Pattern
    • The Gatekeeper Pattern
    • The Circuit Breaker Pattern
    • The Retry Pattern
    • The Strangler Pattern
  2. What are Patterns?
    • General reusable solution to a recurring problem
    • A template on how to solve a problem
    • Best practices
    • Patterns allow developers to communicate with each other using well-known and understood names for software interactions.
  3. External Configuration Pattern
    • Helps move configuration information out of the application deployment
    • This pattern can provide for easier management and control of configuration data
    • For sharing configuration data across applications and other application instances
  4. Typical Application
  5. Storing Configuration in file
  6. Multiple application
  7. Problems
    • Configuration becomes part of deployment
    • Multiple applications share the same configuration
    • Hard to have access control over the configuration
  8. External Configuration Pattern
  9. When to use the pattern
    • When you have shared configuration, multiple application
    • You want to manage configuration centrally by DevOps
    • Provide audit for each configuration
  10. When not to use
    • When you only have a single application there is no need to use this pattern it will make things more complex
  11. Cloud Solution Offerings
    • Azure Key Vault
    • Vault by Hashicorp
    • AWS KMS
    • Keywhiz
  12. Cache Aside Pattern
    • Load data on demand into a cache from datastore
    • Helps improve performance
    • Helps maintain consistency between data held in the cache and data in the underlying data store.
  13. Typical Application
  14. Cache Aside Pattern
  15. When to use the pattern
    • Resource demand is unpredictable.
    • This pattern enables applications to load data on demand
    • It makes no assumptions about which data an application will require in advance
  16. When not to use
    • Don’t use it for data that changes very often
    • Things to consider
      • Sometimes data can be changed from outside process
      • Have an expiry for the data in cache
      • When update of data, invalidate the cache before updating the data in database
      • Pre populate the data if possible
    • Cloud Offerings
      • Redis (Azure and AWS)
      • Memcache
      • Hazelcast
      • ElastiCache (AWS)
    • Federated Identity Pattern
      • Delegate authentication to an external identity provider.
      • Simplify development, minimize the requirement for user administration
      • Improve the user experience of the application
      • Centralizes providing MFA for user authentication
    • Typical Application
    • Problem
      • Complex development and maintenance (Duplicated code)
      • MFA is not an easy thing
      • User administration is a pain with access control
      • Hard to keep system secure
      • No single sign on (SSO) everyone needs to login again to different systems
    • Federated Identity Pattern
    • When to use
      • When you have multiple applications and want to provide SSO for applications
      • Federated identity with multiple partners
      • Federated identity in SAAS application
    • When not to use it
      • You already have a single application and custom code that allows you to log in
    • Things to consider
      • The identity Server needs to be highly available
      • Single point of failure, must have HA
      • RBAC, identity server usually does not have authorization information
      • Claims and scope within the security auth token
    • Cloud Offerings
      • Azure AD
      • Gemalto STA and SAS
      • Amazon IAM
      • GCP Cloud IAM
    • Valet Key Pattern
      • Use a token that provides clients with restricted direct access to a specific resource
      • Provide offload data transfer from the application
      • Minimize cost and maximize scalability and performance
    • Typical Application Client App Storage
    • Problem
    • Valet Key Pattern
    • Client App Generate Token Limited Time And Scope Storage
    • When to use it
      • The application has limited resources
      • To minimize operational cost
      • Many interaction with external resources (upload, download)
      • When the data is stored in a remote data store or a different datacenter
    • When not to use it
      • When you need to transform the data before upload or download
    • Cloud Offerings
      • Azure Blob Storage
      • Amazon S3
      • GCP Cloud Storage
    • Gatekeeper Pattern

      • Using a dedicated host instance that acts as a broker between clients and services
      • Protect applications and services
      • Validates and sanitizes requests, and passes requests and data between them
      • Provide an additional layer of security, and limit the attack surface of the system
    • Typical Application

    • Problem

    • Gatekeeper Pattern

    • When to use it

      • Sensitive information (Health care, Authentication)
      • Distributed System where perform request validation separately
    • When not to use

      • Performance vs security
    • Things to consider

      • WAF should not hold any keys or sensitive information
      • Use a secure communication channel
      • Auto scale
      • Endpoint IP address (when scaling application does the WAF know the new applications)
    • Circuit Breaker Pattern

      • To handle faults that might take a variable amount of time to recover
      • When connecting to a remote service or resource
    • Typical Application

    • Problem

    • When to use it
      • To prevent an application from trying to invoke a remote service or access a shared resource if this operation is highly likely to fail
      • Better user experience
    • When not to use
      • Handling access to local private resources in an application, such as in-memory data structure
      • Creates an overhead
      • Not a substitute for handling exceptions in the business logic of your applications
    • Libraries
      • Polly (http://www.thepollyproject.org/)
      • Netflix (Hystrix) https://github.com/Netflix/Hystrix/wiki
    • Retry pattern
      • Enable an application to handle transient failures
      • When the applications tries to connect to a service or network resource
      • By transparently retrying a failed operation
    • Typical Application has Network Failure

    • Retry Pattern
      • Retry after 2, 5 or 10 seconds
    • When to use it
      • Use retry only for transient failures that are more than likely to resolve themselves quickly
      • Match the retry policies with the application
      • Otherwise use the circuit breaker pattern
    • When not to use it
      • Don’t cause a chain reaction to all components
      • For internal exceptions caused by business logic
      • Log all retry attempts to the service
    • Libraries
      • Roll your own code
      • Polly (http://www.thepollyproject.org/)
      • Netflix (Hystrix) https://github.com/Netflix/Hystrix/wiki
    • Strangler Pattern

      • Incrementally migrate a legacy system
      • Gradually replacing specific pieces of functionality with new applications and services
      • Features from the legacy system are replaced by new system features eventually
      • Strangling the old system and allowing you to decommission it
    • Monolith Application
    • When to use
      • Gradually migrating a back-end application to a new architecture
    • When not to use
      • When requests to the back-end system cannot be intercepted
      • For smaller systems where the complexity of wholesale replacement is low
    • Considerations
      • Handle services and data stores that are potentially used by both new and legacy systems.
      • Make sure both can access these resources side-by-side
      • When migration is complete, the strangler façade will either go away or evolve into an adaptor for legacy clients
      • Make sure the façade doesn't become a single point of failure or a performance bottleneck.
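To make the Retry and Circuit Breaker patterns concrete, here is a rough C# sketch using the Polly library mentioned above; the endpoint URL and the thresholds are placeholder assumptions:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public class ResilienceDemo
{
    public static async Task Main()
    {
        // Retry pattern: wait 2, 5, then 10 seconds between attempts
        var retry = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(new[]
            {
                TimeSpan.FromSeconds(2),
                TimeSpan.FromSeconds(5),
                TimeSpan.FromSeconds(10)
            });

        // Circuit breaker pattern: after 3 straight failures,
        // stop calling the service for 30 seconds
        var breaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

        var client = new HttpClient();
        var response = await retry.WrapAsync(breaker).ExecuteAsync(
            () => client.GetAsync("https://example.com/api/data"));

        Console.WriteLine(response.StatusCode);
    }
}
```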

If you are serious about running Redis, you will want to run it in HA (High Availability) mode. For learning purposes you can run a single instance of Redis quite easily in your Docker environment, but what if you need to run it in production? This is where Redis Sentinel comes into play; let's see what the official Redis Sentinel documentation has to say.

Redis Sentinel
Redis Sentinel provides high availability for Redis. In practical terms this means that using Sentinel you can create a Redis deployment that resists, without human intervention, certain kinds of failures.

The main thing Redis Sentinel provides is automatic failover: when the master node fails, it will automatically choose a slave and promote it to master. How does it do that? It periodically checks the health and liveness of each Redis instance, and it notifies clients and slaves about the new master. The protocol used is a gossip protocol with leader-election algorithms. Sentinel also acts as a central source of authority for client discovery: clients connect to Sentinel to ask for the address of the master node.

Things that Sentinel doesn’t do are mainly:

  1. Manage client connections
  2. Store configurations changes to disk

Usually you would want to run Sentinel on different servers than your Redis servers, for the very simple reason that you don't want your monitoring software on the same server. Can you imagine a SQL Server machine with SQL monitoring on the same box to tell you if it's alive? What if that machine goes down: both go down. One can potentially run Sentinel on client nodes, and it is best not to run it on the master nodes.

Here is a typical way of setting up Sentinel.
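A minimal sentinel.conf for such a setup might look like the sketch below; the master address, the service name mymaster, and the timeouts are assumptions, and the quorum of 2 means two Sentinels must agree before the master is considered down:

```
port 26379
# Monitor the master at 192.168.1.10:6379; quorum of 2
sentinel monitor mymaster 192.168.1.10 6379 2
# Consider the master down after 5 seconds of no response
sentinel down-after-milliseconds mymaster 5000
# Abort a failover if it takes longer than 60 seconds
sentinel failover-timeout mymaster 60000
# Resync one slave at a time after a failover
sentinel parallel-syncs mymaster 1
```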



Unfortunately StackExchange.Redis does not support Sentinel. There is a pull request with the functionality (https://github.com/StackExchange/StackExchange.Redis/pull/692), but as of this writing it has not been merged into StackExchange.Redis.

What are the alternatives? You can roll your own Redis client with Sentinel support, manage it yourself in your code, or use ServiceStack.Redis as another option.
If you are interested in rolling your own, you may want to go through this code provided by PaulB on Stack Overflow.

The concept remains the same: ask Sentinel who the master is, and then make the connection.

Note: the code is not mine; it is from the link above.
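Separately from that snippet, here is my own rough sketch of the idea with StackExchange.Redis; the host names and the mymaster service name are assumptions:

```csharp
using System;
using System.Net;
using StackExchange.Redis;

public class SentinelDemo
{
    public static void Main()
    {
        // Connect to the Sentinel port using the Sentinel command map
        var sentinelConfig = new ConfigurationOptions
        {
            EndPoints = { { "sentinel-host", 26379 } },
            CommandMap = CommandMap.Sentinel,
            TieBreaker = ""
        };
        var sentinel = ConnectionMultiplexer.Connect(sentinelConfig);

        // Ask Sentinel who the current master for "mymaster" is
        var server = sentinel.GetServer("sentinel-host", 26379);
        EndPoint master = server.SentinelGetMasterAddressByName("mymaster");

        // Then open a normal connection to that master
        var redis = ConnectionMultiplexer.Connect(master.ToString());
        Console.WriteLine($"Connected to master at {master}");
    }
}
```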

In the next post we will go through Redis Cluster. I wanted to talk about Sentinel first because readers tend to mix the two up; by explaining Sentinel first, we can then talk about clusters, which are a different concept.


Redis GeoSpatial data sets are actually just sorted sets in Redis; there is no secret about it. Basically, Redis provides an easy way to store geospatial data, like longitude/latitude coordinates. Let's look at some of the commands that Redis provides for geospatial data.

Redis Geo Datatype – Operations

  • GEOADD: Adds or updates one or more members to a Geo Set O(log (N)) where N is the number of elements in the sorted set.
  • GEODIST: Return the distance between two members in the geo spatial index represented by the sorted set O(log(N)).
  • GEOHASH: Gets valid Geohash strings representing the position of one or more elements from the Geo Sets O(log(N)), where N is the number of elements in the sorted set.
  • GEOPOS: Return the longitude,latitude of all the specified members of the geo spatial sorted set at key O(log(N)), where N is the number of elements in the sorted set.
  • GEORADIUS: Return the members of a sorted set populated with geo spatial information using GEOADD, which are within the borders of the area specified with the center location and the maximum distance from the center (the radius) O(N+log(M))
  • GEORADIUSBYMEMBER: Same as GEORADIUS, with the only difference that instead of taking a longitude and latitude value as the center of the area to query, it takes the name of a member already in the geo set O(N+log(M))
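As a quick illustration, the commands look like this in redis-cli; the coordinates below are rough Ottawa locations I made up for the example, not the real court data:

```
GEOADD ottawa:courts -75.6972 45.4215 "downtown-court"
GEOADD ottawa:courts -75.7536 45.3876 "nepean-court"
GEODIST ottawa:courts "downtown-court" "nepean-court" km
GEOPOS ottawa:courts "downtown-court"
GEORADIUS ottawa:courts -75.70 45.42 10 km WITHDIST
```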

I wanted to use some open data to showcase the usage of Redis geospatial data. I chose basketball courts in Ottawa, since my son plays a bit of basketball.
Here is a map of the basketball courts in Ottawa.


Ottawa Basketball Courts

C# code using Redis Geo Set Datatype
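Along those lines, here is a sketch using StackExchange.Redis; the key names and coordinates are made-up examples rather than the actual open data set:

```csharp
using System;
using StackExchange.Redis;

public class GeoDemo
{
    public static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();

        // GEOADD: longitude first, then latitude
        db.GeoAdd("ottawa:courts", -75.6972, 45.4215, "downtown-court");
        db.GeoAdd("ottawa:courts", -75.7536, 45.3876, "nepean-court");

        // GEODIST: distance between two members
        var distance = db.GeoDistance("ottawa:courts",
            "downtown-court", "nepean-court", GeoUnit.Kilometers);
        Console.WriteLine($"Distance: {distance} km");

        // GEORADIUS: courts within 10 km of a point
        var nearby = db.GeoRadius("ottawa:courts",
            -75.70, 45.42, 10, GeoUnit.Kilometers);
        foreach (var court in nearby)
            Console.WriteLine(court.Member);
    }
}
```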

So this covers the basic usage of the Redis geospatial datatype. In the next blog post I will cover Sentinel, which provides high availability for Redis.

For the code please visit

For previous Redis topics

  1. Intro to Redis for .NET Developers
  2. Redis for .NET Developer – Connecting with C#
  3. Redis for .NET Developer – String Datatype
  4. Redis for .NET Developer – String Datatype part 2
  5. Redis for .NET Developer – Hash Datatype
  6. Redis for .NET Developer – List Datatype
  7. Redis for .NET Developer – Redis Sets Datatype
  8. Redis for .NET Developer – Redis Sorted Sets Datatype
  9. Redis for .NET Developer – Redis Hyperloglog
  10. Redis for .NET Developer – Redis Pub Sub
  11. Redis for .NET Developers – Redis Pipeline Batching
  12. Redis for .NET Developers – Redis Transactions
  13. Redis for .NET Developers – Lua Scripting
  14. Redis for .NET Developers – Redis running in Docker
  15. Redis for .NET Developers – Redis running in Azure
  16. Redis for .NET Developers – Redis running in AWS ElastiCache

Redis Lua Scripting

Redis provides a way to extend its functionality on the server side through Lua scripting. If you are coming from the relational database world, you already know that you can use stored procedures to extend the functionality of your database. You may also know that some people frown upon stored procedures, and I think one could put Lua scripting in Redis in the same category. Nevertheless, it is still good to know what you can do with Redis and Lua.

If you want to learn more Lua try this site http://tylerneylon.com/a/learn-lua/

In order to call a Lua script from the StackExchange.Redis library, one can use the LuaScript class or the IServer.ScriptLoad(Async), IServer.ScriptExists(Async), IServer.ScriptFlush(Async), IDatabase.ScriptEvaluate, and IDatabaseAsync.ScriptEvaluateAsync methods.

Let's try something in the Redis console first, using redis-cli.
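Something like the following; the SHA1 hash that SCRIPT LOAD returns will differ on your server, so reuse whatever hash you get back in the EVALSHA call (the trailing 0 says the script uses no keys):

```
SCRIPT LOAD "return 'hello redis'"
EVALSHA <sha1-from-script-load> 0
```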

As you can see, we first load the script and get back a SHA1 hash for it. Redis stores the script in an internal mapping table, and we can reuse the SHA1 hash to call the script with the EVALSHA command, which in this case gave us back “hello redis”.

Remember: while Redis is running your Lua script, it will not run anything else, because Redis is single threaded.

C# code using Redis Lua Script
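Here is a sketch of the same idea through the LuaScript class; the key and value names are placeholders, and the @-prefixed names are parameters that the library maps onto KEYS/ARGV for you:

```csharp
using System;
using StackExchange.Redis;

public class LuaDemo
{
    public static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();

        // @key and @value become KEYS[1]/ARGV[1] under the hood
        var script = LuaScript.Prepare(
            "redis.call('SET', @key, @value) return redis.call('GET', @key)");

        var result = db.ScriptEvaluate(script,
            new { key = (RedisKey)"greeting", value = "hello redis" });

        Console.WriteLine(result);
    }
}
```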

You may have noticed that every time we call the server, we load the script and then execute it. StackExchange.Redis also offers a way to avoid the overhead of transmitting the script text on every call: one can convert a LuaScript into a LoadedLuaScript, like the code below.
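A sketch of that conversion; the script and key name are placeholders:

```csharp
using System;
using StackExchange.Redis;

public class LoadedLuaDemo
{
    public static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var server = redis.GetServer("localhost", 6379);
        var db = redis.GetDatabase();

        var prepared = LuaScript.Prepare("return redis.call('GET', @key)");

        // Sends the script text once; afterwards only the SHA1 goes over the wire
        LoadedLuaScript loaded = prepared.Load(server);

        var result = loaded.Evaluate(db, new { key = (RedisKey)"greeting" });
        Console.WriteLine(result);
    }
}
```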

You would cache the loaded value somewhere in your application; it is usually best to load the scripts when your application starts.

So this covers the basic usage of Lua scripting in Redis. In the next blog post I will cover how to use geospatial data in Redis.

For the code please visit

For previous Redis topics

  1. Intro to Redis for .NET Developers
  2. Redis for .NET Developer – Connecting with C#
  3. Redis for .NET Developer – String Datatype
  4. Redis for .NET Developer – String Datatype part 2
  5. Redis for .NET Developer – Hash Datatype
  6. Redis for .NET Developer – List Datatype
  7. Redis for .NET Developer – Redis Sets Datatype
  8. Redis for .NET Developer – Redis Sorted Sets Datatype
  9. Redis for .NET Developer – Redis Hyperloglog
  10. Redis for .NET Developer – Redis Pub Sub
  11. Redis for .NET Developers – Redis Pipeline Batching
  12. Redis for .NET Developers – Redis Transactions
OAuth and OpenID Connect

I wanted to share my DevTeach Montreal 2017 talk, where I spoke about OAuth and OpenID Connect: the types of OAuth grants, how to consume them, the flows in OAuth, where OpenID Connect comes into play, and what it solves.

I hope you like the presentation, and if you are interested in more security topics, ping me and let me know what you would be interested in.


1. OAUTH2 & OPENID CONNECT DEMYSTIFIED Taswar Bhatti (Microsoft MVP) GEMALTO @taswarbhatti http://taswar.zeytinsoft.com taswar@gmail.com
2. WHO AM I?? – 4 years Microsoft MVP – 17 years in software – Author of Instant AutoMapper (Packt) – Currently working as a System Architect in the enterprise security space (Gemalto) – You may not have heard of Gemalto, but 1/3 of the world population uses Gemalto; they just don't know it
3. WHAT WE WILL COVER TODAY? OAuth 2.0 OAuth flows OpenID JWT (JSON Web Token), which some pronounce “jot” OpenID Connect Demo (Keycloak IDP)
4. WHAT IS OAUTH? An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.
5. OAUTH HISTORY OAuth started circa 2007 2010 – RFC 5849 defines OAuth 1.0 2010 – OAuth 2.0 work begins in IETF Working deployments of various drafts & versions at Google, Microsoft, Facebook, Github, Twitter, Flickr, Dropbox … Mid 2012 – Lead author and editor resigned & withdrew his name from all specs (DRAMA……) October 2012 – RFC 6749, RFC 6750
6. THE GOOD OAuth 2.0 is easier to implement than OAuth 1.0 Widespread and continues growing Short-lived tokens Encapsulated tokens OAuth2 makes it HTTP/JSON friendly to request and transmit tokens Takes “multiple client” architectures into account Clients can have varying trust levels
7. OAUTH 2.0 – Transport Security : Using HTTPS and TLS – Ease : Usable (no digital certs to verify) – Flexible : Mobile, Web SPA apps, etc – Decoupled: Resource server and authorization server – Bearer Token : Easy for integration; Id Token also known as keys 9/24/2017 7
8. SO I CAN USE MY PASSWORD???
9. OAUTH IS LIKE A VALET KEY – Provides another domain delegated access to your application server resources
10. OAUTH ROLES User Application API
11. OAUTH ROLES User Application API
12. OAUTH MISCONCEPTION Ohh this is easy!! When I login to Spotify with Twitter, it grabs my username and password from Twitter…. Wrong !!!!!!!!!!!!!! Developer
13. OAUTH IS NOT FOR – Traditional Access Control – Not for authentication – Not for Federation – OAuth should be used for delegation
14. BEARER TOKEN GET /somedata HTTP/1.1 Host: someserver.com Authorization: Bearer a3b4c55cf The access token can be in JWT format – A security token with the property that any party in possession of the token (a "bearer") can use the token in any way that any other party in possession of it can
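A quick sketch of what attaching that bearer token looks like from code; the URL and token value are the placeholders from the slide, not a real API, and nothing is actually sent over the network here:

```python
# Sketch: build a GET request that carries a bearer token, as on the slide.
# The endpoint and token are placeholders, not a real service.
from urllib.request import Request

def with_bearer(url: str, access_token: str) -> Request:
    """Attach the access token in the Authorization header."""
    req = Request(url)
    req.add_header("Authorization", f"Bearer {access_token}")
    return req

req = with_bearer("https://someserver.com/somedata", "a3b4c55cf")
```

The key point is that nothing ties the token to the caller: whoever holds it can replay this exact header, which is why bearer tokens must travel over TLS.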
15. OAUTH TERMINOLOGY – Client or Consumer Application : Typically a web-based or mobile application that wants to access the User's Protected Resources – Resource Server or Resource Provider : A web site or web service API where the User keeps his/her protected data – Authorization Server : The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization – User or Resource Owner : A member of the Resource Provider, wanting to share certain resources with a third party – Client Credentials : The consumer key and consumer secret used to authenticate the Client – Tokens : Access tokens generated by the server after a request from the client
16. OAUTH TOKEN TYPES – Access Token : Used to directly access protected resources on behalf of a user or service – Refresh Token : When given to an authorization server, it will give you a new access token – Authorization Code : Used only in the authorization code grant type to obtain an access token or refresh token
17. HIGH LEVEL FLOW OF OAUTH 2 – A developer registers an app with an OAuth service provider (let's say Twitter) – They get an app key/secret for each app they register – When users log in they are redirected to the service provider to provide their credentials – If the user approves, a token is issued to the app for a limited time – Finally the client uses the token to access the resource
18. OAUTH USAGE In OAuth [authorization]: You are in your BigPhotoPrintingCorp.net account and you need to access your images from the AwesomeImage.com site. BigPhotoPrintingCorp.net redirects you to AwesomeImage.com. You enter your credentials at AwesomeImage.com and authenticate yourself (this part is like OpenID). AwesomeImage.com asks if you want to give permission to access only your photos; you select yes. AwesomeImage.com redirects back to BigPhotoPrintingCorp.net. BigPhotoPrintingCorp.net can now access AwesomeImage.com.
20. 4 TYPES OF OAUTH FLOW Authorization Code Grant : for apps running on a web server; long-lived tokens Implicit Grant : for browser-based or mobile apps, while the user is logged in; short-lived tokens Resource Owner Password Credentials Grant : for logging in with a username and password; trusted applications Client Credentials Grant : for machine-to-machine application access
21. AUTHORIZATION CODE FOR APPS RUNNING ON A WEB SERVER This is the most common type of application you have when dealing with OAuth servers. Web apps run on a server where the source code of the application is not available to the public. In this case your site will REDIRECT you to the particular authorization server. If the web server makes multiple requests it can use the STATE parameter to map callback responses to requests. One of the more complicated flows in OAuth.
22. YOU HAVE SEEN THIS BEFORE
23. IMPLICIT FOR BROWSER-BASED OR MOBILE APPS Browser-based apps run entirely in the browser after getting their source code from a web server. Since the entire source code is available to the public, they cannot maintain the confidentiality of a client secret, so the secret is not used in this case. The app makes API calls with the token that is assigned to it. Mobile apps also cannot maintain the confidentiality of a client secret, so they too must use an OAuth flow that does not require one. The token is exposed to the local operating system, so there are no refresh tokens.
24. PASSWORD FOR LOGGING IN WITH A USERNAME AND PASSWORD OAuth 2 also provides a "password" grant type which can be used to exchange a username and password for an access token directly. This obviously requires the application to collect the user's password, so users may hesitate to use it unless the app comes from the auth service provider itself. Only used in highly trusted applications, e.g. your official Facebook app rather than a 3rd-party app (Batman's Fancy Facebook app).
25. MEET THE ACTORS IN OUR OAUTH Resource Owner or User Application Authorization Server Resource Server or API
26. CLIENT CREDENTIALS FOR APPLICATION ACCESS There are scenarios where applications may wish to get statistics about the users of the app. In this case, applications need a way to get an access token for their own account, outside the context of any specific user. OAuth provides the client credentials grant type for this purpose. This is essentially machine-to-machine communication.
28. TOKEN – CLIENT CREDENTIAL GRANT $ curl -X POST https://api.mysite.com/oauth/token -d 'grant_type=client_credentials' -d 'client_id=TestClient' -d 'client_secret=TestSecret'
29. TOKEN – CLIENT CREDENTIAL GRANT Response from Authorization Server { "access_token":"03807cb390319329bdf6c777d4dfae9c0d3b3c35", "expires_in":3600, "token_type":"bearer", "scope":null }
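On the client side, handling that response is just JSON parsing plus tracking the expiry; a sketch using the sample body from the slide (no real HTTP call is made):

```python
import json
import time

# The response body from the authorization server, as shown on the slide.
body = """{
  "access_token": "03807cb390319329bdf6c777d4dfae9c0d3b3c35",
  "expires_in": 3600,
  "token_type": "bearer",
  "scope": null
}"""

token = json.loads(body)

# Convert the relative expires_in into an absolute time, so the client
# knows when to go back to the token endpoint for a fresh token.
expires_at = time.time() + token["expires_in"]

# This is the header the client will send to the resource server.
auth_header = {"Authorization": f"Bearer {token['access_token']}"}
```

Storing `expires_at` rather than `expires_in` is the usual trick: the client can refresh slightly before expiry instead of waiting for a 401.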
30. PASSWORD GRANT TYPE
31. PASSWORD GRANT $ curl -X POST https://api.mysite.com/oauth/token -d 'client_id=TestClient' -d 'client_secret=TestSecret' -d 'grant_type=password' -d 'username=batman' -d 'password=nananananananannaBatman'
32. SCOPES AKA PERMISSIONS – Roles/authority where you want to control who can do what – The names of permissions – User scopes – Client/Application scopes – The token contains the intersection
33. SCOPES CarKey.Ignite
34. SCOPES CarKey.OpenTrunk CarKey.Ignite
35. SCOPES IN TOKEN Response from Authorization Server { "access_token":"03807cb390319329bdf6c777d4dfae9c0d3b3c35", "expires_in":3600, "token_type":"bearer", "scope":"CarKey.Ignite" }
37. AUTHORIZATION GRANT $ https://fancy.mysite.com/oidc #Reaching out to application, are you logged in? 302 HTTP Redirect https://api.mysite.com/authorize?response_type=code&client_id=TestClient&redirect_uri=https://fancy.mysite.com/oidc
38. AUTHORIZATION CODE GRANT GET /oauth/authorize #Login to the app SUCCESS you get back a code HTTP 302 redirect back to redirect_uri https://fancy.mysite.com/oidc?code=SplxlOBeZQQYbYS6WxSbIA&state=xyz
39. AUTHORIZATION CODE GETTING THE TOKEN $ curl -X POST https://api.mysite.com/oauth/token -d 'client_id=TestClient' -d 'client_secret=TestSecret' -d 'grant_type=authorization_code' -d 'code=SplxlOBeZQQYbYS6WxSbIA'
40. ACCESS TOKEN Response from Authorization Server { "access_token":"03807cb390319329bdf6c777d4dfae9c0d3b3c35", "expires_in":3600, "token_type":"bearer", "scope":"CarKey.Ignite" }
41. RESOURCE SERVER CHECKS THE TOKEN – If it is a JWT, you can verify it against the key of whoever signed it – Or use an endpoint that checks the token and returns its scopes, to verify it is a valid token
42. IMPLICIT GRANT TYPE – Used for clients that can easily be impersonated, like phone or mobile applications – 3rd-party applications – A simplified Authorization Code Grant that eliminates the code step – The access token is given directly to the app – No refresh tokens are given; access tokens are short-lived – Requires the Resource Owner to be involved again for a new access token
44. OPENID Sharing a single Identity with different consumers Decentralized OpenID is a form of Single Sign On (SSO) OpenID is a URL http://myname.myopenid.com
45. WHAT CAN YOU DO? One can claim and prove they own the OpenID Use it for authentication At a high level it's like Microsoft Passport It's a form of authentication; if you have a system you will still need to populate your own fields (e.g. firstname, email, etc.), since OpenID does not provide that information
46. OPENID USAGE In OpenID [authentication]: You want to access your account on bigcorp.net. bigcorp.net asks for your OpenID. You enter your OpenID username. bigcorp.net redirects you to your OpenID provider's site. You give your password to the OpenID provider and authenticate yourself. The OpenID provider redirects you back to bigcorp.net. bigcorp.net grants you access to your account.
47. OPENID CONNECT We have talked about OAuth and OpenID, and there is also OpenID Connect. It's the new SSO authentication for the internet. OpenID Connect builds on top of OAuth2, since sometimes you may just need authentication (remember, OAuth2 is for authorization). OpenID Connect provides the Implicit flow and the Authorization Code flow.
49. OPENID CONNECT TOKEN { "sub" : "alice", "user_name" : "Taswar", "iss" : "https://openid.c2id.com", "aud" : "client-12345", "auth_time" : 123456789, "iat" : 1311280970, "exp" : 1311281970, "email" : "Taswar@gmail.com", "phone_number" : "123-4567" }
50. OPENID CONNECT – HYBRID
51. JWT (JSON WEB TOKEN) JSON Web Token (JWT) is a compact URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JavaScript Object Notation (JSON) object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or MACed and/or encrypted.
52. JWT CONT A JWT token looks like this: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjEzODY4OTkxMzEsImlzcyI6ImppcmE6MTU0ODk1OTUiLCJxc2giOiI4MDYzZmY0Y2ExZTQxZGY3YmM5MGM4YWI2ZDBmNjIwN2Q0OTFjZjZkYWQ3YzY2ZWE3OTdiNDYxNGI3MTkyMmU5IiwiaWF0IjoxMzg2ODk4OTUxfQ.uKqU9dTB6gKwG6jQCuXYAiMNdfNRw98Hw_IWuA5MaMo Ok great…………. Once you understand the format, it's actually pretty simple: [header].[payload].[signature]
53. JWT CONT In other words: You create a header object in JSON format, then encode it as base64. You create a claims object in JSON format, then encode it in base64. You create a signature, then encode it in base64. You concatenate the three items with the "." separator.
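You can see the [header].[payload].[signature] structure by splitting on "." and base64url-decoding the first two segments. A sketch with a throwaway token built in code (unsigned here; signing is covered on slide 57):

```python
import base64
import json

def b64url_encode(raw: bytes) -> str:
    """base64url without padding, as JWT uses."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    """base64url segments drop padding; restore it before decoding."""
    segment += "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment)

# Build a toy token so the decode step below has something to work on.
header_seg = b64url_encode(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
payload_seg = b64url_encode(json.dumps({"iss": "taswar.zeytinsoft.com", "admin": True}).encode())
token = f"{header_seg}.{payload_seg}.signature-goes-here"

# Decoding is just the reverse: split on "." and undo the base64url.
h, p, _sig = token.split(".")
header = json.loads(b64url_decode(h))
claims = json.loads(b64url_decode(p))
```

Note that nothing in this step involves the secret: anyone can read the header and claims, which is why a JWT is signed, not hidden.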
54. BENEFITS JSON Web Tokens work across different programming languages: JWTs work in .NET, Python, Node.js, Java, PHP, Ruby, Go, JavaScript, and Haskell. So you can see that these can be used in many different scenarios. JWTs are self-contained: They will carry all the information necessary within itself. This means that a JWT will be able to transmit basic information about itself, a payload (usually user information), and a signature. JWTs can be passed around easily: Since JWTs are self-contained, they are perfectly used inside an HTTP header when authenticating an API. You can also pass it through the URL.
55. HEADER The header carries 2 parts: the type (JWT) and the hashing algorithm, like below { "typ": "JWT", "alg": "HS256" } Then base64-encode it
56. PAYLOAD & CLAIMS The payload will carry the bulk of our JWT, also called the JWT Claims. This is where we will put the information that we want to transmit and other information about our token. There are multiple claims that we can provide. This includes registered claim names, public claim names, and private claim names. { "iss": "taswar.zeytinsoft.com", "exp": 1300819380, "name": "Taswar Bhatti", "admin": true }
57. SIGNATURE The third and final part of our JSON Web Token is the signature. The signature is a hash over the following components: the header, the payload, and a secret. The secret is held by the server; this is how the server is able to verify existing tokens and sign new ones. var encodedString = base64UrlEncode(header) + "." + base64UrlEncode(payload); HMACSHA256(encodedString, 'secret');
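That pseudocode maps directly onto the standard library; here is a minimal HS256 sign-and-verify sketch with a hard-coded demo secret (obviously never hard-code the secret in a real server):

```python
import base64
import hashlib
import hmac
import json

def b64url(raw: bytes) -> str:
    """base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build base64url(header) + "." + base64url(payload) + "." + HMAC-SHA256 signature."""
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature over header.payload and compare in constant time."""
    signing_input, _, signature = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

token = sign_jwt({"name": "Taswar Bhatti", "admin": True}, b"secret")
```

Using `hmac.compare_digest` instead of `==` avoids timing side channels when comparing signatures; a production service would also check the `alg` header and the `exp` claim before trusting the payload.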
58. ENDPOINTS OF OPENID CONNECT Authorization Endpoint (Regular OAuth) Identity Endpoint (username/pass, hardware token, biometrics) UserInfo Endpoint (name, birthday, picture, etc) Optionals (Session Endpoint, WebFinger etc)
59. OPENID CONNECT TOKENS The OpenID Connect server provides client applications with two key tokens: ID token – asserts the users identity in a signed and verifiable way. Access token – provides access to the user’s details at the UserInfo endpoint and other protected web APIs.
60. DEMO
61. THANK YOU Questions? Contact: Taswar@gmail.com Blog: http://Taswar.zeytinsoft.com Twitter: @taswarbhatti And a special thanks to Lego Batman !

elastic search

Wanted to share my DevTeach talk slides on Elasticsearch, where I introduced the Elastic Stack, consisting of Elasticsearch, Logstash, and Kibana. I also went into the constraints that we had and the design approaches that we took.

Hope you enjoy and expect more ElasticSearch blogs this year 🙂


1. STORE 2 MILLION AUDIT LOGS A DAY INTO ELASTICSEARCH Taswar Bhatti (Microsoft MVP) GEMALTO @taswarbhatti http://taswar.zeytinsoft.com taswar@gmail.com
2. WHO AM I? – 4 years Microsoft MVP – 17 years in the software industry – Currently working as a System Architect in the enterprise security space (Gemalto) – You may not have heard of Gemalto, but 1/3 of the world's population uses Gemalto; they just don't know it – Gemalto has stacks built in many environments: .NET, Java, Node, Lua, Python, mobile (Android, iOS), ebanking, etc. 9/22/2017 2
3. AGENDA – The problem we had and wanted to solve with the Elastic Stack – Intro to the Elastic Stack (ecosystem) – Logstash – Kibana – Beats – Elasticsearch flow designs that we considered – Future plans for using Elasticsearch
4. QUESTION & POLL – How many of you are using Elastic or some other logging solution? – How do you normally log? Where do you log? – Do you log in a relational database?
5. HOW DO YOU TROUBLESHOOT OR FIND YOUR BUGS – Typically in a distributed environment one has to go through the logs to find out where the issue is – There could be multiple systems you have to go through to find which machine/server generated the log, or multiple logs to monitor – You may even monitor firewall logs to find out which data center traffic is routing through – Chuck Norris never troubleshoots; the trouble kills itself when it sees him coming
7. OUR PROBLEM – We had distributed systems (microservices) that would generate many different types of logs, in different data centers – We also had authentication audit logs that had to be secured and stored for 1 year – We generate around 2 million audit log records a day, 4TB with replication – We need to generate reports out of our data for customers – We were still using a monolith solution in some core parts of the application – Growing pains of a successful application – We want to use a centralized, scalable logging system for all our logs
9. A LITTLE HISTORY OF ELASTICSEARCH – Shay Banon created Compass in 2004 – Released Elasticsearch 1.0 in 2010 – Elasticsearch the company was formed in 2012 – Shay's wife is still waiting for her recipe app
11. ELASTIC STACK
12. ELASTICSEARCH – Written in Java, backed by Lucene – Schema-free, REST & JSON based document store – Search engine – Distributed, horizontally scalable – No database storage; storage is Lucene – Apache 2.0 License
14. ELASTICSEARCH INDICES – Elastic organizes documents in indices – Lucene writes and maintains the index files – Elasticsearch writes and maintains metadata on top of Lucene – Example: field mappings, index settings and other cluster metadata
15. DATABASE VS ELASTIC
16. ELASTIC CONCEPTS – Cluster : A collection of one or more nodes (servers) – Node : A single server that is part of your cluster, stores your data, and participates in the cluster's indexing and search capabilities – Index : A collection of documents that have somewhat similar characteristics (e.g. Product, Customer, etc.) – Type : Within an index, you can define one or more types; a type is a logical category/partition of your index – Document : A basic unit of information that can be indexed – Shard/Replica : An index is divided into multiple pieces called shards; replicas are copies of your shards
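Because Elasticsearch is a REST and JSON document store, the cluster/index/type/document hierarchy maps straight onto URLs. A sketch that only builds the request pieces (the host, index name, and document are made up, and nothing is sent over the network):

```python
import json

def index_request(host: str, index: str, doc_type: str, doc_id: str, doc: dict):
    """Build the URL and JSON body for indexing one document: PUT /index/type/id."""
    url = f"{host}/{index}/{doc_type}/{doc_id}"
    body = json.dumps(doc)
    return url, body

# Hypothetical example: index customer #1 into a "customer" index.
url, body = index_request(
    "http://localhost:9200", "customer", "doc", "1",
    {"name": "Taswar", "city": "Ottawa"},
)
```

In a real client you would hand `url` and `body` to an HTTP library (or just use the official Elasticsearch client), but the point of the slide stands: every concept in the list above is addressable as a path segment or a JSON document.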
17. ELASTIC NODES – Master Node : controls the cluster – Data Node : holds data and performs data-related operations such as CRUD, search, and aggregations – Ingest Node : can apply an ingest pipeline to a document in order to transform and enrich the document before indexing – Coordinating Node : only routes requests, handles the search reduce phase, and distributes bulk indexing
21. SHARD SEARCH AND INDEX
22. DEMO OF ELASTICSEARCH
23. LOGSTASH – Ruby application that runs under JRuby on the JVM – Collects, parses, enriches data – Horizontally scalable – Apache 2.0 License – Large number of public plugins written by the community: https://github.com/logstash-plugins
26. LOGSTASH INPUT
27. LOGSTASH FILTER
28. LOGSTASH OUTPUT
29. DEMO LOGSTASH
30. BEATS
31. BEATS – Lightweight shippers written in Golang (non-JVM shops can use them) – They follow the Unix philosophy: do one specific thing, and do it well – Filebeat : log files (think of it as tail -f on steroids) – Metricbeat : CPU, memory (like top), Redis, MongoDB usage – Packetbeat : uses libpcap like Wireshark, monitoring packets, HTTP, etc. – Winlogbeat : ships Windows event logs to Elastic – Dockbeat : monitoring Docker – Large community; lots of other beats offered as open source
33. FILEBEAT
34. X-PACK – Elastic's commercial offering (this is one of the ways they make money) – X-Pack is an Elastic Stack extension that bundles – Security (HTTPS to Elastic, password to access Kibana) – Alerting – Monitoring – Reporting – Graph capabilities – Machine Learning
36. KIBANA – Visual application for Elasticsearch (JS, Angular, D3) – Powerful dashboard frontend for visualizing index information from Elasticsearch – Historical data to form charts, graphs, etc. – Realtime search for index information
38. DEMO KIBANA
39. DESIGNS WE WENT THROUGH – We started with a simple design to measure throughput – One instance of Logstash and one instance of Elasticsearch with Filebeat
40. DOTNET CORE APP – We used a .NET Core application to generate logs – Serilog to generate logs in JSON format, stored on file – Filebeat was installed on the Linux machine to ship the logs to Logstash
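The shape Filebeat tails best is one JSON object per line. Our app did this with Serilog in .NET; as a language-neutral sketch of the same idea (the field names here are made up, not our Serilog schema):

```python
import io
import json
import time

def write_log_line(stream, level: str, message: str) -> None:
    """Append one self-contained JSON log entry per line (newline-delimited JSON)."""
    entry = {"ts": time.time(), "level": level, "message": message}
    stream.write(json.dumps(entry) + "\n")

# Write to an in-memory buffer for the example; the real app writes to a
# file that Filebeat tails and ships to Logstash.
buf = io.StringIO()
write_log_line(buf, "INFO", "user logged in")
write_log_line(buf, "ERROR", "token expired")
lines = buf.getvalue().splitlines()
```

Keeping each entry on a single line means Filebeat never has to reassemble multi-line events, and Logstash can parse each line with a plain JSON filter.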
41. PERFORMANCE ELASTIC – 250 log items per second for 30 minutes
42. OVERVIEW
43. LOGSTASH
44. ELASTICSEARCH RUN TWO – 1000 logs per second, run for 30 minutes
45. PERFORMANCE
46. OTHER DESIGNS
48. CONSIDERATIONS OF DATA – Indexing by day makes sense in some cases – In others you may want to index by size instead (Black Friday brings more traffic than other days); when shards are not balanced, Elasticsearch doesn't like that – Don't index everything; if you are not going to search on specific fields, mark them as text
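Index-by-day usually just means a date-stamped index name, so each day's documents land in their own index and old ones can be dropped wholesale. A sketch (the `auditlog` prefix is arbitrary, not our actual index name):

```python
from datetime import date

def daily_index(prefix: str, day: date) -> str:
    """Logstash-style daily index name, e.g. auditlog-2017.09.22."""
    return f"{prefix}-{day:%Y.%m.%d}"

name = daily_index("auditlog", date(2017, 9, 22))
```

Retention then becomes an index-level delete rather than a costly delete-by-query, which is exactly why time-based indices are the default pattern for logs.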
49. FUTURE CONSIDERATIONS – Investigate Elasticsearch Machine Learning – Elasticsearch with Kafka for cross-data-center replication
50. THANK YOU & OPEN TO QUESTIONS – Questions??? – Contact: Taswar@gmail.com – Blog: http://Taswar.zeytinsoft.com – Twitter: @taswarbhatti – LinkedIn (find me and add me)

Taswar Bhatti Talk on MS Bot Framework

In May 2017 I did a talk at the Ottawa .NET User Group on Introduction to Microsoft Bot Framework; it was an interesting turnout with lots of conversation on what a bot can do for a business and how to use them.

Below you will find the slides for my talk on Microsoft Bot Framework. The sample demo code can be found on GitHub at https://github.com/Microsoft/BotBuilder-Samples, where we demo searching real estate, image search, etc.

Ottawa IT Meetup Community: https://www.meetup.com/ottawaitcommunity/events/235920172/

If you are interested in more on bot framework and like to see more articles on it, please let me know 🙂