A Guide to Successfully Integrating Azure DevOps with Power Apps
Introduction:

With the unprecedented and ever-growing global demand for Digital Transformation, an effective way to mitigate the global shortage of developers is to leverage low-code platforms such as Power Apps. As a best practice, you can plan and automate Power Apps builds in parallel with low-code development. Azure DevOps Pipelines is a robust mechanism for implementing CI/CD (Continuous Integration/Continuous Delivery) to continuously build, test, and deploy Power Apps. 

Power Apps Application Lifecycle Management (ALM)   

Application lifecycle management (ALM) spans several areas: plan and track, develop, build and test, deploy and operate, and monitor and learn. 

Power Apps Solutions are used to transport apps and components from one environment to another, like from a dev environment to a build or user environment. In other words, solutions are the mechanism for implementing ALM in Power Apps.  

A solution can contain one or more apps as well as other components such as site maps, tables, processes, web resources, flows, and more. A component represents an artifact used in your application and something that you can potentially customize.  

Dataverse stores all the artifacts, including solutions. To use the Power Platform ALM features and tools, your environment must include a Dataverse database; see https://docs.microsoft.com/en-us/power-platform/alm/overview-alm 

Power Apps Continuous Integration/Continuous Delivery (CI/CD) 

ALM addresses app development, as well as many other tasks including continuous integration and continuous delivery (CI/CD).  

As a CI/CD platform, Azure DevOps allows you to automate your build, test, and deployment process. 

Source control can be used to store Power Apps source code, as discussed below, and to collaborate on your components. 


Power Apps DevOps 

The Azure DevOps platform provides automation tools that enable continuous Power Apps delivery and better value for customers.  

Azure DevOps Pipeline is a mechanism to implement CI/CD to continuously build, test, and deploy Power Apps. 

Microsoft's Power CAT team has published a YouTube video showing how Power Apps solutions can be used in conjunction with Azure DevOps: https://www.youtube.com/watch?v=xwCUJmrRI9E 

Unmanaged solutions are to be used in the Power Apps dev environment. Managed solutions are used to deploy Power Apps to any user environment like UAT or Production.  

Dev environment solutions can be exported as unmanaged (.zip file) and then unpacked so that the extracted files can be checked into a source control system. In a nutshell, unmanaged solutions should be considered your source. 

Managed solutions can be generated by a build server and considered as a build artifact.  

Diagram – Power Apps CI/CD automated jobs

Power Apps Pipelines 

Log in to Azure DevOps. Power Apps CI/CD pipelines, or automated jobs, can be created as follows:   

Job001

Create a pipeline to export and unpack a Power Apps unmanaged solution, and then store the extracted files into source control. 

  • Search marketplace with keywords: Power Platform 
  • Add step: Power Platform Tool Installer that initializes everything so that you can use the subsequent steps. 
  • Add step: Power Platform Publish Customizations.
    • It requires a service connection to your environment.  
  • Add step: Power Platform Export Solution.
    • Enter service connection, source solution name, and solution output file.
    • Specify what you want to export as an unmanaged solution.  
  • Add step: Power Platform Checker to verify before things are checked into source control.
    • It will stop if there are errors.  
    • Enter the required fields and choose Rule Set – Solution Checker. 
  • Add step: Power Platform Unpack Solution.
    • Enter required fields such as input solution zip file, target folder, and type of solution. 
  • Add step: Command Line Script to check the extracted files into source control.
    • For instance, you can specify a Git command-line script to commit and push the extracted files. 

Save, queue, and run Job001. Verify that the expected Power Apps files (master copy) appear in your source control repository, provided the checker reports no errors. 
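For reference, the classic-editor steps above translate to a YAML pipeline roughly like the sketch below. This is a minimal sketch rather than the exact pipeline described above: it assumes the Power Platform Build Tools extension is installed, and the service connection name (DevEnvironmentConnection), solution name (MySolution), rule-set variable, and branch are placeholders you would replace.

```yaml
# Job001 - export an unmanaged solution, check it, unpack it, and commit it
trigger: none

pool:
  vmImage: windows-latest

steps:
- task: PowerPlatformToolInstaller@2      # initializes the tools for the later steps

- task: PowerPlatformPublishCustomizations@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: DevEnvironmentConnection        # placeholder service connection

- task: PowerPlatformExportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: DevEnvironmentConnection
    SolutionName: MySolution                          # placeholder solution name
    SolutionOutputFile: $(Build.ArtifactStagingDirectory)\MySolution.zip

- task: PowerPlatformChecker@2            # stops the run if the checker finds errors
  inputs:
    PowerPlatformSPN: DevEnvironmentConnection
    FilesToAnalyze: $(Build.ArtifactStagingDirectory)\MySolution.zip
    RuleSet: $(SolutionCheckerRuleSetId)              # id of the Solution Checker rule set

- task: PowerPlatformUnpackSolution@2
  inputs:
    SolutionInputFile: $(Build.ArtifactStagingDirectory)\MySolution.zip
    SolutionTargetFolder: $(Build.SourcesDirectory)\MySolution

- script: |
    git config user.email "pipeline@example.com"
    git config user.name "Build pipeline"
    git add MySolution
    git commit -m "Automated export of MySolution"
    git push origin HEAD:main
  workingDirectory: $(Build.SourcesDirectory)
  displayName: Check extracted files into source control
```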

Job002

Create a pipeline to build a Power Apps managed solution by adding necessary steps.

  • Add step: Power Platform Tool Installer. 
  • Add step: Power Platform Pack Solution.
    • You are packing the files previously checked into source control. 
  • Add step: Power Platform Import Solution.
    • You are importing it into your build server. 
  • Add step: Power Platform Export Solution.
    • You are exporting it as a managed solution.
  • Add step: Publish Build Artifact.
    • You can track all managed solutions produced. 

Save and run Job002. Verify that it has created a Power Apps managed solution (zip file). 
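Under the same assumptions as the Job001 sketch, Job002 might look like the following; BuildEnvironmentConnection is a placeholder for a service connection to a dedicated build environment.

```yaml
# Job002 - pack the checked-in source and produce a managed solution artifact
steps:
- task: PowerPlatformToolInstaller@2

- task: PowerPlatformPackSolution@2
  inputs:
    SolutionSourceFolder: $(Build.SourcesDirectory)\MySolution
    SolutionOutputFile: $(Build.ArtifactStagingDirectory)\MySolution.zip

- task: PowerPlatformImportSolution@2     # round-trip through the build environment
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: BuildEnvironmentConnection      # placeholder service connection
    SolutionInputFile: $(Build.ArtifactStagingDirectory)\MySolution.zip

- task: PowerPlatformExportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: BuildEnvironmentConnection
    SolutionName: MySolution
    Managed: true                                     # export as a managed solution
    SolutionOutputFile: $(Build.ArtifactStagingDirectory)\MySolution_managed.zip

- task: PublishBuildArtifacts@1           # keep the managed solution as a build artifact
  inputs:
    PathtoPublish: $(Build.ArtifactStagingDirectory)\MySolution_managed.zip
    ArtifactName: drop
```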

Job003

Create a release pipeline to deploy a Power Apps managed solution to your user environment. Add necessary steps. Build on and use the previous pipeline’s output. 

Save and run Job003. Verify that it has been deployed to your user environment (UAT, Production). 
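The release stage itself needs little more than an import of the artifact produced by Job002. A minimal sketch, with ProdConnection standing in for the target environment's service connection:

```yaml
# Job003 - deploy the managed solution to the user environment (UAT, Production)
steps:
- task: PowerPlatformToolInstaller@2

- task: PowerPlatformImportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: ProdConnection                  # placeholder service connection
    SolutionInputFile: $(Pipeline.Workspace)/drop/MySolution_managed.zip
```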

Summary:

Power Apps development and deployment can be managed using source control and automated CI/CD pipelines. Unmanaged solutions are for development only; managed solutions should be deployed to user environments. Build automation does not have to wait until development is finished: it can start alongside Power Apps customization or code development. 

We hope you found this blog useful in learning about managing Power Apps development and deployment using source control and automated CI/CD pipelines. Please reach out to us so that we can put our years of Microsoft Power Apps build and deployment experience and capabilities to work for your organization’s Digital Transformation. 

An Introduction to Azure Service Fabric Reliable Services

Introduction:

Azure Service Fabric is an open-source project, and it powers core Azure infrastructure as well as other Microsoft services such as Skype for Business, Intune, Azure Event Hubs, Azure Data Factory, Azure Cosmos DB, Azure SQL Database, Dynamics 365, and Cortana. Azure Service Fabric is a distributed systems platform for deploying and managing scalable and reliable microservices and containers. Service Fabric can be used as a platform for decomposing monolithic applications. It provides an iterative approach to decompose an IIS/ASP.NET website into an application composed of multiple, manageable microservices.

Moving from a monolithic to microservice architecture provides the following benefits:
  • You can change one small, understandable unit of code and deploy only that unit.
  • Each code unit requires just a few minutes or less to deploy.
  • If there is an error in that small unit, only that unit stops working, not the whole application.
  • Small units of code can be distributed easily and discretely among multiple development teams.
  • New developers can quickly and easily grasp the discrete functionality of each unit.
Using Service Fabric as the hosting platform, we can convert a large IIS website into a collection of microservices as shown below:
Diagram – a large IIS website decomposed into a collection of Service Fabric microservices
In the diagram above, we decomposed all the parts of a large IIS application into:
  • A routing or gateway service that accepts incoming browser requests, parses them to determine what service should handle them and forwards the request to that service.
  • Four ASP.NET Core applications that were formerly virtual directories under the single IIS site running as ASP.NET applications. The applications were separated into their own independent microservices, so they can be changed, versioned, and upgraded separately. In this example, we rewrote each application using .NET Core and ASP.NET Core. These were written as Reliable Services so they can natively access the full Service Fabric platform capabilities and benefits (communication services, health reports, notifications, etc.).
  • A Windows service called Indexing Service is placed in a Windows container so that it no longer makes direct changes to the registry of the underlying server but can run self-contained and be deployed with all its dependencies as a single unit.
  • An Archive service is just an executable that runs according to a schedule and performs some tasks for the sites. It is hosted directly as a stand-alone executable because we determined it does what it needs to do without modification, and it is not worth the investment to change.

Using Reliable Services:

An Azure Service Fabric application contains one or more services that run your code. This guide shows you how to create both stateless and stateful Service Fabric applications with Reliable Services. To get started with Reliable Services, you only need to understand a few basic concepts:

  • Service type: This is your service implementation. It is defined by the class you write that extends StatelessService (or StatefulService) and any other code or dependencies used therein, along with a name and a version number.
  • Named service instance: To run your service, you create named instances of your service type, much like you create object instances of a class type. A service instance has a name in the form of a URI using the “fabric:/” scheme, such as “fabric:/MyApp/MyService”.
  • Service host: The named service instances you create need to run inside a host process. The service host is just a process where instances of your service can run.
  • Service registration: Registration brings everything together. The service type must be registered with the Service Fabric runtime in a service host to allow Service Fabric to create instances of it to run; a minimal host sketch follows this list.
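To make these concepts concrete, here is a minimal host sketch, loosely following the Visual Studio Reliable Services template. The service type name and the MyStatelessService class (sketched in the next section) are placeholders for this example:

```csharp
using System.Threading;
using Microsoft.ServiceFabric.Services.Runtime;

// Service host: a plain process in which Service Fabric runs your instances.
internal static class Program
{
    private static void Main()
    {
        // Service registration: ties the service type name (as declared in
        // ServiceManifest.xml) to the class that implements it.
        ServiceRuntime.RegisterServiceAsync(
            "MyStatelessServiceType",
            context => new MyStatelessService(context)).GetAwaiter().GetResult();

        // Keep the host process alive; the runtime manages instance lifetimes.
        Thread.Sleep(Timeout.Infinite);
    }
}
```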

Stateless Services

A stateless service is a type of service that is currently the norm in cloud applications. It is considered stateless because the service itself does not contain data that needs to be stored reliably or made available. If an instance of a stateless service shuts down, all its internal state is lost. In this type of service, the state must be persisted to an external store, such as Azure Tables or SQL Database, for it to be made available and reliable.

Follow the steps at this URL to build one: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-quick-start

A simple Proof of Concept (PoC) output looks like this:
Screenshot – stateless service PoC output

The platform calls the RunAsync() method when an instance of a service is placed and ready to execute. For a stateless service, that simply means when the service instance is opened. A cancellation token is provided to coordinate when your service instance needs to be closed. In Service Fabric, this open/close cycle of a service instance can occur many times over the lifetime of the service. This can happen for several reasons, including:

  • The system moves your service instances for resource balancing.
  • Faults occur in your code.
  • The application or system is upgraded.
  • The underlying hardware experiences an outage.

This orchestration is managed by the system to keep your service available and perfectly balanced.

RunAsync() is the entry point of these services. It should not block synchronously. Your implementation of RunAsync should return a Task or await any long-running or blocking operations to allow the runtime to continue. Note that in the quick-start's while (true) loop, a Task-returning await Task.Delay() is used. If your workload must block synchronously, you should schedule a new Task with Task.Run() in your RunAsync implementation.

Cancellation of your workload is a cooperative effort orchestrated by the provided cancellation token. The system will wait for your task to end (by successful completion, cancellation, or fault) before it moves on. It is important to honor the cancellation token, finish any work, and exit RunAsync() as quickly as possible when the system requests cancellation.

In this stateless service example, the count is stored in a local variable. But because this is a stateless service, the value that is stored exists only for the current lifecycle of its service instance. When the service moves or restarts, the value is lost.
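Putting this together, a minimal stateless counter service might look like the sketch below. It is modeled on the quick-start sample; the Visual Studio template normally logs through a generated ServiceEventSource class, which is replaced here with Console output to keep the sketch self-contained:

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// A minimal stateless counter service, modeled on the quick-start sample.
internal sealed class MyStatelessService : StatelessService
{
    public MyStatelessService(StatelessServiceContext context)
        : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        long iterations = 0; // plain local state: lost when the instance moves or restarts

        while (true)
        {
            // Honor cancellation so the runtime can close this instance cleanly.
            cancellationToken.ThrowIfCancellationRequested();

            Console.WriteLine($"Working-{++iterations}");

            // Await rather than block, so the runtime is not tied up.
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}
```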

Stateful Services

Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain the state reliably within the service itself, co-located with the code that is using it. The state is made available by Service Fabric without the need to persist state to an external store.

To convert a counter value from stateless to highly available and persistent, even when the service moves or restarts, you need a stateful service.

Follow the steps at this URL to build one: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-quick-start

A simple PoC output looks like this:
Screenshot – stateful service PoC output

RunAsync() operates similarly in stateful and stateless services. However, in a stateful service, the platform performs additional work on your behalf before it executes RunAsync(). This work can include ensuring that the Reliable State Manager and Reliable Collections are ready to use.
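As a sketch of the stateful counterpart, again patterned on the quick-start sample, the counter lives in a reliable dictionary instead of a local variable, so its value survives moves and restarts:

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// A minimal stateful counter: state lives in a replicated reliable collection.
internal sealed class MyStatefulService : StatefulService
{
    public MyStatefulService(StatefulServiceContext context)
        : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // Reliable collections are retrieved from the state manager by name.
        var counters = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("counters");

        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            using (ITransaction tx = this.StateManager.CreateTransaction())
            {
                ConditionalValue<long> current =
                    await counters.TryGetValueAsync(tx, "count");
                await counters.SetAsync(tx, "count",
                    current.HasValue ? current.Value + 1 : 0);

                // Nothing is persisted or replicated until the commit succeeds.
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}
```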

Basic sample POC: https://github.com/Netwoven/WebsiteSamples/tree/master/ReliableServiceApp

References to sample projects: https://thecloudblog.net/post/hands-on-with-azure-service-fabric-reliable-services/ , https://www.innominds.com/blog/azure-service-fabric-and-stateless-services-the-gateway-to-development-and-management-of-microservices , https://www.c-sharpcorner.com/article/creating-services-with-azure-service-fabric/

Which One to Choose

Stateless reliable services are remarkably similar to what you may already know as Cloud Services.

  • They can be used to implement background, headless worker processes (like worker roles), or online services that you can connect to, i.e., web servers (like web roles). This distinction does not actually exist in Service Fabric as a reliable service can be implemented as a worker and/or web service.
  • As with Cloud Services, you define an instance count, and the runtime will create as many instances as possible of your service over the nodes of the cluster. If an instance fails or crashes, a new one will be automatically launched.
  • To implement a web service, you define which endpoints (port + protocol) you want to open, and the runtime will load-balance the requests between all running instances.
  • Service Fabric is currently web server agnostic, which means that there is no interface with IIS, and you have to self-host your own web server.

Stateful reliable services

Same concept as the stateless version, but with persistent storage implemented directly in the service.

  • State storage is performed through reliable collections, which are generic collections that ensure reliable storage of the data over the nodes running the service. There are currently two such collections available: a dictionary and a queue.
  • No instance counts here, but you must define the number of replicas that the runtime will create; the goal of those replicas is to ensure high availability and reliable persistence of state.
  • At any point in time, one of your replicas is the primary, and the others run as secondary replicas. If the primary replica crashes, the runtime promotes a secondary to primary.
  • By default, all requests are routed by the runtime to the primary replica only and all updates to the state (reliable collections) are replicated to secondary replicas. That is a core difference between stateless and stateful reliable services: with stateless services, all instances can serve requests (as they do not share a state, or at least not within the Service Fabric) whereas, with stateful services, only the primary replica is “active” (by default).
  • It is possible to configure a stateful service to route requests to secondary replicas, but it is forbidden to update state on those (for obvious data-consistency reasons).

Conclusion:

Azure Service Fabric Reliable Services are extremely useful for modularizing monolithic projects and in microservice architectures. A stateless service is ideally used for data transformation operations where no state needs to be persisted in the service itself. Note that this does not mean it cannot save any state in a centralized data store. It can; however, the data store cannot be owned by the service. For example, an image transformation service will just transform the image, with the logging, analytics, and other information being sent either to a centralized data store or to other services.

Thank you for reading the blog; we hope you found it useful in learning in detail about Azure Service Fabric Reliable Services. Please reach out to us so that we can put our decades of Microsoft technologies experience and capabilities to work for your organization's Digital Transformation.

Azure Universal Print Deployment Guide

Introduction:

Universal Print is a modern print solution that organizations can use to manage their print infrastructure through cloud services from Microsoft. Universal Print runs entirely on Microsoft Azure. When it is deployed with Universal Print–compatible printers, it does not require any on-premises infrastructure. 

How to deploy Azure Universal Print? 

Universal Print is a Microsoft 365 subscription-based service that organizations use to centralize print management through the Universal Print portal. It is fully integrated with Azure Active Directory and supports single sign-on scenarios. 

Universal Print can be deployed with non-compatible printers by using Universal Print connector software. 

1. Architecture

Diagram – Universal Print architecture

2. Prerequisites

Cloud Requirements: 
  • An active Azure AD (Azure Active Directory) tenant (an Azure subscription is not required) 
  • A Universal Print license 
  • One Global Admin or Printer Administrator account with a Universal Print license 
Windows 10 Client Requirements: 
  • Windows 10 build version 1903 or later. 
  • An internet connection. 
  • The device can be AAD (Azure Active Directory) joined, hybrid AD joined, or AAD registered. 
User Requirements: 
  • A Universal Print license assigned to each user and to the Printer Administrator. 
Connector Requirements: 
  • Windows 10 64-bit (Pro or Enterprise), version 1809 or later. 
  • Windows Server 2016 64-bit or later (Windows Server 2019 64-bit or later is recommended). 
  • .NET Framework 4.7.2 or later. 
  • A continuous connection to the internet. 

3. Steps to configure Universal Print 

STEP 1 – Assign Universal Print License to users 

A Universal Print license is included by default with business and educational Microsoft 365 and Windows 10 subscriptions, but it can also be purchased as a standalone license. We need to assign this license to the client users as well as to the account that will be used to log in to the 'Universal Print Connector' application. 

STEP 2 – Install the UP Connector

Download the Universal Print connector from https://aka.ms/upconnector.  

Install the connector on the 'Printer Server' (a physical system or virtual machine running Windows 10 64-bit (Pro or Enterprise), Windows Server 2016 64-bit, or Windows Server 2019 64-bit or later). 

After the installation, we need to register the UP connector in Azure; to do so we need a 'Global Admin' or a 'Printer Administrator' account. Log in with those credentials. 


Enter a name for the connector and register it with the Azure tenant. This name will be shown in the Azure tenant under 'Printer Connectors.' 


Now, if we go to the Azure portal and search for 'Universal Print' in the global search, we can find the Connectors menu under the resource menu, and there we can see the connector name that we have just registered. 

STEP 3 – Register the Printers 

Open the 'Universal Print Connector' application on the printer server. Now we need to register the printers in the cloud by selecting them from the connector interface. All printers locally installed on the printer server are visible under the 'Available Printers' section. Select the printers you want to register and click the 'Register' button. After this step, the selected printers are listed under the 'Registered Printers' section. 

After successfully completing the printer registration, remember to 'Sign Out' of the connector. 

Note that the user account used to log in to the connector is not the service account under which the connector runs (the connector service is visible in Windows Services); it is only used to register the printers from the connector interface to the Azure tenant and to manage them. 

STEP 4 – Share the Printers 

Once the printers are successfully registered in the 'Universal Print Connector' application, they will be visible in the Azure portal under Printers. 

Now there are two ways to share the printers: 

Process 1 – Select multiple printers and click 'Share' to immediately share the printers, each under its own printer name. Select the users or groups that will have access to the printers. 


Process 2 – Click on a printer name, and in the next pop-up window click 'Share Printer.' 


Type the desired share name (1), select users or groups (2), and then click Share Printer (3). 

4. Add Printers on Windows 10 / Windows 11 Clients 

Note: You cannot have more than one work or school account configured in the profile. If you do, the cloud printer will not be discovered, or you will not see the option to search for work printers.

Ref: https://docs.microsoft.com/en-us/universal-print/fundamentals/universal-print-troubleshooting-support-howto#discover-and-install-printer-on-client (read the 'Discover and Install printer on Client' section) 

Once the above steps are completed, follow the steps below to add cloud printers on client devices. 
  • Open the "Add a Printer or Scanner" wizard from the Start Menu.
  • Select "Add a printer or scanner".
  • If the cloud printer is shared with you, it should be discovered automatically in the list. Click Add device.
  • Once the device is installed successfully, it should show as "READY", and the printer should also be listed under "Printers & Scanners".

Print a test page to verify the installation of the printer you have just added. 
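If you also want to verify the deployment programmatically, Universal Print exposes its resources through Microsoft Graph. Below is a minimal sketch that lists the tenant's registered printers over plain HTTPS; it assumes you have already acquired an access token (for example, with MSAL) for a principal that has the Printer.Read.All permission, supplied here via a hypothetical GRAPH_TOKEN environment variable:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Lists the tenant's registered Universal Print printers via Microsoft Graph.
internal static class PrinterCheck
{
    public static async Task Main()
    {
        // Assumes a pre-acquired Graph token with the Printer.Read.All permission.
        string accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN")
            ?? throw new InvalidOperationException("Set GRAPH_TOKEN first.");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Universal Print resources live under the /print segment of Graph.
        HttpResponseMessage response = await client.GetAsync(
            "https://graph.microsoft.com/v1.0/print/printers");
        response.EnsureSuccessStatusCode();

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```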

We hope you found this blog useful. We can put our decades of Microsoft 365 and Azure Active Directory experience to work for your organization's digital transformation. Please reach out to learn more.

A Complete Walkthrough of Azure Cosmos DB and Why You Should Use It

Azure Cosmos DB is Microsoft's fully managed, globally distributed, and horizontally scalable cloud database service. It's a multi-model NoSQL database that provides independent scaling across all Azure regions. Additionally, Azure Cosmos DB has extensive tooling and API support for different programming paradigms, making it easier for users with an existing NoSQL or cloud database workload to move to Azure Cosmos DB.

This post explores the features of Azure Cosmos DB that make it a compelling proposition for your business.

NoSQL Databases

NoSQL databases have now been around for quite some time. But unlike the term relational database, NoSQL is essentially an umbrella term that encompasses different technologies and formats for storing and retrieving data. The primary reasons for preferring a NoSQL database for storing your data are:

  • Rapidly changing data types: Data is now generated and stored in different data formats such as structured, unstructured and semi-structured data types. Traditional data stores only support storing data in structured formats. NoSQL provides efficient storage and query capabilities for unstructured and semi-structured data.
  • Schema Constraints: Relational data stores enforce rigid schemas for storing and managing data. Often, the schema ends up constraining how quickly an application adapts to changing business needs. NoSQL databases typically don't require a schema when storing data. However, that doesn't mean there's no schema: you can associate a schema with the data during retrieval. This means your application is not locked into a schema and can easily adapt to changing needs.
  • Performance and Scalability: With huge volumes of data being processed at scale in different applications, relational data stores are unable to keep up with loads of that size. NoSQL data stores, on the other hand, provide scale-out, replication, and horizontal-partitioning capabilities that enable businesses to deliver high throughput and low latency along with high availability.

As mentioned previously, NoSQL includes databases built on different models and technologies, and these databases can be broadly grouped into certain categories. The major categories of NoSQL databases are:

  • Columnar: Data is stored in groups of column families that are often accessed together. An instance of data can have any number of columns and these columns are grouped or aggregated as required for data retrieval. Examples – HBase, Cassandra, Amazon DynamoDB and Google BigTable
  • Key-Value: Data is represented as a combination of a unique attribute (key) and its related content (value). The application accessing the data is responsible for applying appropriate context (schema) to stored data. Examples – Redis, Riak, Berkeley DB, Couchbase and MemcacheDB
  • Document: A document, the equivalent of a row in a relational database, is a complex, self-contained hierarchical data structure that contains key-value pairs or nested documents. A document is typically formatted in XML, JSON, or BSON and stored together with others in a collection. Examples – MongoDB, CouchDB, IBM Domino and DocumentDB (the precursor to Azure Cosmos DB).
  • Graph: Data is stored in a Graph database as a network (graph) of entities and relationships. The interpretation of the data is based on the relationship between different entities. So typical data retrieval requires fast traversal through the network to get the desired entities. Examples – Neo4j, OrientDB, and FlockDB

Azure Cosmos DB Features

In 2014, Microsoft introduced its first cloud-based NoSQL database, called DocumentDB, which provided low latency and high throughput. As the name suggests, it was a document-oriented NoSQL database that offered a SQL-like querying interface for retrieving document data. Azure Cosmos DB, introduced in 2017, is a progression of DocumentDB. In addition to the existing DocumentDB capabilities, Microsoft added many more features that made Azure Cosmos DB a truly flexible, scalable, and globally distributed cloud-based NoSQL database service.

Let’s look at some of these key features:

Global Distribution

Azure Cosmos DB is an Azure Foundational (Ring 0) service, and hence it is available by default in every location where Azure is available. So, you can set up instances of your Cosmos DB at any location you want simply by activating the desired location from the Azure portal. This ensures that your data is replicated and available for your users in the region with guaranteed low latency. Additionally, Cosmos DB provides automatic and manual failover, which enables high availability and disaster recovery.

Performance

Performance in any application is typically measured through latency and throughput. With its global distribution, replication, and failover options, Cosmos DB ensures that your customers continue to access their data with fast response times, no matter where they are. Cosmos DB also provides guaranteed throughput based on the provisioned capacity. You can control this throughput at the database level or at the container level.

Pricing

The Azure Cosmos DB pricing model depends on the required throughput and the storage necessary for your data. Under this model, you reserve throughput and storage capacity based on your estimates and scale them independently, elastically, and globally to suit your application requirements. This ensures you get the desired performance and cost for your applications based on expected performance and data storage needs.

Multi-Model and Multi APIs

Azure Cosmos DB is a multi-model database that supports multiple data models through a single integrated platform. As of now, Azure Cosmos DB enables you to create containers that can store data as key-value, columnar, document, or graph data stores. Along with multiple models, Cosmos DB also gives users the flexibility to choose from a variety of familiar APIs to access the data, such as:

  • SQL API or MongoDB API (for Document databases)
  • Table API (for Key-Value databases)
  • Cassandra API (for Columnar databases)
  • Gremlin API (for Graph databases)

With support for different data models and APIs, Azure Cosmos DB makes it very easy to store your data in the format best suited for your application and to query it using tools you may already be familiar with.
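As a quick illustration of the SQL API from .NET, the sketch below creates a container, upserts an item, and queries it back. It is a minimal sketch, assuming the Microsoft.Azure.Cosmos SDK and a connection string in an environment variable; the database, container, and item shapes are invented for the example:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos; // NuGet: Microsoft.Azure.Cosmos

internal static class CosmosQuickStart
{
    // Hypothetical item shape; the "id" property is required by Cosmos DB.
    private sealed record Order(string id, string customerId, double total);

    public static async Task Main()
    {
        using var client = new CosmosClient(
            Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

        Database db = await client.CreateDatabaseIfNotExistsAsync("storeDb");
        Container orders = await db.CreateContainerIfNotExistsAsync(
            id: "orders",
            partitionKeyPath: "/customerId",
            throughput: 400); // provisioned RU/s at the container level

        await orders.UpsertItemAsync(
            new Order("order-1", "customer-42", 99.95),
            new PartitionKey("customer-42"));

        FeedIterator<Order> results = orders.GetItemQueryIterator<Order>(
            "SELECT * FROM c WHERE c.customerId = 'customer-42'");
        while (results.HasMoreResults)
        {
            foreach (Order o in await results.ReadNextAsync())
                Console.WriteLine($"{o.id}: {o.total}");
        }
    }
}
```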

5 Well-defined Consistency Levels

Azure Cosmos DB allows you to choose a consistency level that strikes a balance between latency, throughput, and availability appropriate for your needs (a small SDK sketch follows the list below). The consistency levels offered are:

  • Strong Consistency: Ensures consistency across all nodes, in all regions, but this comes at the cost of overall performance.
  • Bounded Staleness Consistency: Provides a means to bound how stale reads can be; reads may lag behind writes by at most a configured number of versions or time interval, so stale reads are possible within that bound.
  • Session Consistency: Guarantees that a writer reads its own writes within a session, but readers in other sessions may see stale data. This is the default consistency level for Azure Cosmos DB.
  • Consistent Prefix: Ensures that reads never see out-of-order writes; the data read reflects some prefix of the writes applied to the replicas.
  • Eventual Consistency: Provides no guarantees on the freshness or ordering of the data. However, this provides the fastest performance.
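As mentioned above the list, here is a small sketch of selecting a consistency level with the .NET SDK; the endpoint is a placeholder. A client may request a weaker level than the account default, but not a stronger one:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Relax consistency to Session for this client (cannot exceed the account default).
var client = new CosmosClient(
    accountEndpoint: "https://myaccount.documents.azure.com:443/", // placeholder
    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"),
    clientOptions: new CosmosClientOptions
    {
        ConsistencyLevel = ConsistencyLevel.Session
    });
```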

Tooling

In addition to using the different APIs to store or query data in Azure Cosmos DB, you can also call these APIs programmatically using languages such as Java, .NET, Python, JavaScript, and Go. Microsoft also provides strong tooling support around Cosmos DB that helps simplify a lot of operations. Some of the tools include:

  • dtui.exe and dt.exe: These are GUI and command-line tools that help you migrate your data from different sources such as JSON, BSON, SQL Server, MongoDB, DynamoDB, HBase, CSV, and blobs into Azure Cosmos DB. The migration tool can be downloaded from GitHub, or you can directly download a pre-compiled binary.
  • Azure Cosmos DB Emulator: As the name suggests, this tool provides a local environment that emulates the Cosmos DB service so that you can use it for development and testing without incurring any costs. Once you are satisfied with your results in the emulator, you can deploy the data to your Cosmos DB instance in Azure. The emulator can be downloaded from the Microsoft Download Center.
  • Azure Cosmos DB Explorer: This is a standalone web-based tool that provides a one-stop interface to manage your Cosmos DB data. Apart from data management, the Azure Cosmos DB Explorer also provides temporary or permanent access to the data in your containers for other users who may not be able to access it through the Azure portal. It can also be used to share the results of your queries with other users. To access the Azure Cosmos DB Explorer, go to https://cosmos.azure.com. You will need your account connection string to connect to your database instance.
  • Capacity Planner: A handy tool that gives you a quick estimate of the approximate Request Units (RUs) you will need for your planned workload. The capacity planner will help you fine-tune your throughput and storage estimates. Based on the estimated RUs for your requirements, you can then select the appropriate pricing model from the Azure portal for your containers.

Azure Cosmos DB Usage Scenarios

Azure Cosmos DB is suitable for any high-performance application that requires global scale. It is specifically designed to handle applications that require low response times with massive amounts of reads and writes. Some of the cases where it makes a great fit are:

  • Globally Distributed Applications: Businesses that need to provide low latency data access to users at a massive scale over geographies and ensure high availability and disaster recovery across multiple data centers/regions.
  • IoT and Telemetry Applications: Infrastructure to support the ingestion of huge volumes of disparate data from many devices.
  • E-Commerce Platforms: Websites that need to scale elastically to handle traffic spikes around events such as the Super Bowl or Black Friday.
  • Recommendation/Classification Engines: Applications that collect customer data such as interests, browsing history, buying patterns and uses machine learning models to quickly provide predictive insights on customer behavior.
  • Operational Logging and Analytics: Applications that store and analyze huge volumes of log data and other associated data at a scale to provide operational insights quickly and accurately.
  • Gaming Applications: Applications that need to support sudden spurts in usage, along with super low latency required to provide an optimal gaming user experience.
  • Social Media Applications: Applications that run on a global scale and have unpredictable usage loads such as tweets, blog or image posts, comments or chat sessions.

Wrapping Up

As we discussed above, Azure Cosmos DB offers a wide range of features that make it an easy and cost-effective choice for globally distributed data storage, with guaranteed throughput and low latency. If you need to store and process planet-scale data in a NoSQL data store, Azure Cosmos DB, with all its benefits, should be your first choice for building the infrastructure.

In the next post, we will look at some of the design considerations involved in designing an Azure Cosmos DB database. Until then, stay tuned.

However, if you are trying to import data from other databases to Cosmos DB, Netwoven can help! As an Elite Gold certified Microsoft Partner, we can help you develop, deploy or monitor apps in Azure as per your business needs.

Azure Infrastructure Management in a Nutshell

The Microsoft Azure infrastructure platform is among the largest cloud platforms for hosting IT infrastructure, whether organizations are rapidly replacing on-premises environments or leaning toward a hybrid approach. The foremost challenge in managing such a versatile platform is that the traditional centralized management model we practiced for on-premises IT infrastructure no longer fits. To address this, Microsoft Azure came up with a new model of decentralized IT services, whose path forward is to encourage cloud-first adoption. The decentralized IT infrastructure model comes with some evident advantages, such as:

  • Better DevOps flexibility.
  • A native cloud experience: instant feature availability for subscription users.
  • Readily available marketplace solutions to choose from.
  • Fewer subscription-limit issues.
  • Better control over groups and permissions.
  • Better control over provisioning and subscriptions.
  • Distributed ownership by business groups for billing and capacity management.

Moreover, the modern hybrid cloud continues to be managed as a solution, transitioning from the on-premises IT management model to self-service, cloud-native solutions for monitoring, management, backup, and security across the entire cloud platform.


Azure Management Aspects

Azure management consists of facilities that work for the Azure cloud as well as for Azure hybrid environments to provide the following oversight:

  • General IT and operational policy implementation, as approved by the subscription owner. Areas include
    • Compliance
    • Operations
    • Incident management.
  • Shared network connectivity over Site-to-Site VPN or dedicated connectivity over ExpressRoute, as needed.
  • Visibility into infrastructure inefficiencies and self-service tool development.

In this section, we will discuss the various management areas: monitoring, patching and inventory management, data recovery, security and compliance, and secure DevOps.

Azure Monitoring

The purpose of Azure monitoring is threefold. First, create visibility: give business groups access to a foundational set of metrics, alerts, and notifications across core Azure services. Second, provide insight: business groups and service lines can view rich analytics and diagnostics across applications, as well as compute, storage, and network resources, including anomaly detection and proactive alerting. Finally, enable optimization: by understanding how users engage with their applications, service lines can identify sticking points and optimize the business impact of their solutions.

Patching and Inventory Management

This aspect of Azure management addresses the continuous upgrading and maintenance of Azure cloud-based and on-premises infrastructure platforms. It encourages an Azure-based self-service solution for business groups that gives them control over their patching and management environment while giving central IT the ability to monitor for compliance and security purposes. The features supporting this aspect are as follows:

  • Azure Update Management and software distribution for business groups from an SCCM instance hosted in an Azure VM, plus policy-based updates from Microsoft Intune.
  • Self-service patch management with operating system and application updates in Azure, including centralized compliance reporting.
  • Inventory management through discovery, tracking, and management of IT assets using Intune and SCCM hosted in an Azure VM.

Data Recovery

Azure Backup is a solution with which each business and service group can safeguard, retain, and recover its data. The data recovery solution addresses the following major concerns:

  • Recover business data from attacks by malicious software or malicious activity.
  • Recover from accidental deletion or data corruption.
  • Secure critical business data.
  • Maintain compliance standards.
  • Provide historical data recovery requirements for legal purposes.

Azure Backup as a self-service solution gives business groups more control over how they perform their backups by assigning them responsibility for backing up their business data, because each business group has the best knowledge of its own data.

Security and Compliance

The decentralized model for the Azure platform demands rigorous scrutiny where security and compliance are concerned. To address this requirement, the Azure security and compliance model is kept centralized across all the cloud management solutions. The following needs drive the typical application of security and compliance measures:

  • Analyze and investigate incidents
  • Detect threats before they happen
  • Perform security audits
  • Automate data collection
  • Azure Policy puts guardrails on subscriptions that automatically keep business and service groups within governance restrictions. Policies help control settings by default, from constraining network configurations to safe patterns, to controlling the regions and types of Azure resources available for use, to ensuring data is stored with encryption enabled. (A minimal policy-definition sketch follows this list.)
  • Automation is required to keep a handle on the constantly changing Azure cloud environment, especially in DevOps, where end-to-end automation includes automated security. Automated security saves time and cost for apps that are frequently updated and helps quickly configure and deploy security.
  • Recurring security assurance establishes a definite security baseline and tracks deviation from that baseline to maintain a consistent level of security assurance across the environment. This helps ensure that builds and deployments that are secure stay secure from one release iteration to the next.
  • Empower engineering teams by integrating pre-approved security workflows created by DevOps. This keeps the process short and precise, without the hassle of infrastructure-admin approval every time.
  • A secure DevOps environment requires a clear understanding of operational risks in the Azure cloud. To achieve this, development teams need the ability to assess security state across DevOps stages and establish capabilities to receive security alerts and reminders for significant recurring activities.
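As referenced in the first bullet above, an Azure Policy definition is a small JSON document. The sketch below mirrors the built-in "Allowed locations" policy, denying deployments outside a parameterized list of regions; the display name and metadata text are illustrative:

```json
{
  "properties": {
    "displayName": "Allowed locations",
    "policyType": "Custom",
    "mode": "Indexed",
    "parameters": {
      "listOfAllowedLocations": {
        "type": "Array",
        "metadata": { "description": "Regions that resources may be deployed to." }
      }
    },
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('listOfAllowedLocations')]"
        }
      },
      "then": { "effect": "deny" }
    }
  }
}
```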

Verdict

Azure's versatile management approach, decentralizing all task-based aspects while keeping security and compliance centralized, is the key factor in successfully managing Azure cloud and hybrid platforms.
