Custom Development Archives - Netwoven

Import JSON Files into MySql Database Table Part – 2

Introduction:

In Part 1 of this blog series, we learned how to create and test the ADLA procedure that imports structured and semi-structured JSON files into a MySQL database table using Azure Data Factory and Azure Data Lake Analytics. In this second and final part of the series, we will learn how to create and assign permissions to a service principal in a one-time setup, and how to build the Data Factory pipelines that generate the CSV file and import it into MySQL.

Step 4: Create and assign permissions to a service principal (one-time setup)

We need a linked service that lets Azure Data Factory communicate with Azure Data Lake Analytics, because we will call the Azure Data Lake Analytics U-SQL stored procedure from Azure Data Factory.

Please go through the link for details.

After creating the service principal, we must grant it read and write permissions so that it can submit and run ADLA jobs through ADF.

Step 5: Create Azure Data Factory components to generate a CSV file from the JSON file

To perform this task, we have to create two linked services, two datasets, and one pipeline, which will also help us in the subsequent implementation.

Linked services: One linked service is needed for Data Factory to communicate with the Azure storage account. Another linked service is needed for Data Factory to communicate with the Azure Data Lake Analytics account; for this one, we need the service principal details.

Datasets: One dataset will point to the raw JSON file, and another dataset will be connected to the storage account for the output CSV file.

Pipeline: We need one pipeline containing one U-SQL activity, which will call the previously created U-SQL stored procedure.

Once we click the "New" button in the ADLA linked service option, a new window opens where we have to enter the service principal information.
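
For reference, here is a minimal sketch of the equivalent ADLA linked service definition that ADF stores behind this window (all names and IDs are placeholders, not values from this project):

{
  "name": "AzureDataLakeAnalyticsLinkedService",
  "properties": {
    "type": "AzureDataLakeAnalytics",
    "typeProperties": {
      "accountName": "<your ADLA account name>",
      "servicePrincipalId": "<application (client) ID>",
      "servicePrincipalKey": { "type": "SecureString", "value": "<service principal key>" },
      "tenant": "<tenant ID>",
      "subscriptionId": "<subscription ID>",
      "resourceGroupName": "<resource group name>"
    }
  }
}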

In the Script section, we need another linked service and dataset, as below.

We will call the ADLA procedure from the U-SQL activity's script.
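
As a sketch, the script simply invokes the stored procedure created in Part 1, passing the thread ID parameter (the values shown are the example ones from Part 1):

[SlackToTeams].dbo.uspCreateDirectMessageCsv("D5123EFLH5");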

Now we have completed all the components needed to generate the CSV file from the JSON file. To check it, we start debugging the pipeline as below. Once the pipeline runs properly, it will create a CSV file in the output folder that was provided in the U-SQL procedure.

Step 6: Data Factory pipeline to import the converted CSV-formatted data into a MySQL table

In this case, we require two linked services: one will communicate with the CSV file, and the other with the MySQL database (you may use any type of relational database). We require another pipeline containing one "Copy Activity" and two datasets, as explained by the screenshot below.

Configure source

We have to specify the source folder that contains the CSV file. It will be connected with a linked service.

Configure Sink

In the sink settings, we will create a linked service connected to the specific MySQL table where we need to insert the records from the CSV file.

Configure mapping

In the mapping, we will map each CSV file column to the corresponding MySQL table column.

So, with this process we can store JSON files of different structures in a relational representation, in any relational table such as MySQL or MS SQL. Those records may then be processed by another application.

Choosing between Functional and Class Components in ReactJS Applications

Introduction:

The choice between Functional and Class components was easy until February 2019: Functional components were stateless and Class components stateful. If a project did not require maintaining state or using the various page lifecycle methods, one would opt for Functional components for their simplicity; otherwise, the choice was Class components. Since most complex applications required state maintenance, Class components became extremely popular. But then 'hooks' were introduced for Functional components, adding state and page lifecycle management capabilities. Gradually, Functional components came to be used more and more. Even behemoths such as Facebook, Netflix and Instagram are now using React Functional components, which begs the question: 'Should we shift from Class to Functional components?'

Functional Components:

Creating a Functional component

Functional components are simple, short and easy to write. Those who are used to object-oriented programming may find the syntax a little difficult to adapt to initially, but once they get the hang of it, it saves loads of lines of code. Below is a sample code:

const FunctionalComponent = (props) => {
  const [counter, setCounter] = React.useState(0);

  const increaseUser = () => {
    setCounter(counter + 1);
  };

  React.useEffect(() => {
    return () => {
      console.log("Done");
    };
  }, []);

  return (
    <div>
      <h1>Welcome User: {props.username}</h1>
      <h3>User Count:</h3>
      <h2>{counter}</h2>
      <button onClick={increaseUser}>Add</button>
    </div>
  );
};

In the sample above, this Functional component gets user details as input and prints the username. It also has a button that increments the user counter using the useState hook, so props and state are both taken care of. The useEffect hook takes care of the page lifecycle: with the empty dependency array, its body runs once when the component mounts, and the cleanup function it returns runs when the component unmounts. Without the returned cleanup function, useEffect covers only the component mount (and, if the dependency array is omitted, component update) events.

Pros of Functional Components:
  • Functional components are simple functions that are easy to read. Maintenance-wise, the code is easy to understand even when you revisit it after a long time.
  • There is no requirement for a constructor or individual page lifecycle methods. State and lifecycle handling is easily done using "hooks" like useState and useEffect.
  • There is no need to be concerned about the 'this' keyword, which creates confusion between page scope, event scope, etc.
  • Also, function binding and constructors are not required for component initialization.
  • Decoupling is easy in Functional components. It is extremely easy to identify and differentiate UI and logic, making the components effectively reusable.
  • There is scope for performance improvement, making their performance better than Class components. Today the improvement is 6%, but the React team promises it can go up to 45%.
  • Debugging and testing are easier as these are simple JavaScript functions.
Cons of Functional Components:
  • In versions before React 16.8, hooks were not available and state management was not possible in Functional components, so earlier versions cannot support state.
  • People who are used to the object-oriented programming format find Class components much easier to pick up. The syntax is a little difficult to understand for those used to the Class model.
  • Stateful logic should ideally be separated out from Functional components, hence not making them reusable as a complete unit.
  • For complex components, code is harder to understand due to decoupling.
  • For complex state management, Redux might be required to support Functional components.

Class Components:

Creating a Class component

Class components in React are very similar to those in object-oriented programming languages. People used to C# or Java can very easily pick up this coding structure, with its constructors, page lifecycle methods, etc. Below is a sample code:

export class ClassComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      counter: 0
    };
    this.increaseUser = this.increaseUser.bind(this);
  }
  componentWillUnmount() {
    console.log("Done");
  }

  increaseUser() {
    this.setState({counter: this.state.counter + 1});
  }

  render() {
    return (
       <div>
            <h1>Welcome User:{this.props.username} </h1>
            <h3>User Count: </h3>
          <h2>{this.state.counter}</h2>
            <button onClick={this.increaseUser}>Add</button>
        </div>
    );
  }
}

In this sample, the Class component gets user details as input and prints the username. It also has an Add button that increments the user counter by 1 using setState. The component is initialized in the constructor, and the increaseUser function used here is bound in the constructor. The componentWillUnmount event is called to perform actions when the component unloads. So, both code snippets, following the two different component structures, give the same output, but the syntax and structure differ. The Functional component needs fewer lines of code and is simpler.

Pros of Class Components:
  • Till 2019, Class components were the only means of state management and lifecycle management in React, and hence most complex projects used them.
  • The coding format is very easy to learn if you are acquainted with an object-oriented programming language like C# or Java.
  • Stateful logic and UI can be maintained within the same component, keeping it coupled.
  • Complex designs can be easily achieved by mimicking object-oriented concepts.
  • Class components are sufficient for complex state management; there is no requirement for Redux.
Cons of Class Components:
  • Code maintenance is more complex than with Functional components. The number of lines of code is higher and readability is worse.
  • Performance of Functional components is better now and will further improve, making performance a bottleneck for Class components in the future.
  • Decoupling is not easy in Class components. All UI and stateful logic are maintained in the same component, making reusability less flexible than with Functional components.
  • Testing, simulation and debugging are more complex than with Functional components.

Functional components or Class components – which one to choose when architecting a project?

The choice is still a difficult one. The trend is to move towards Functional components, and yet React has no plans to deprecate Class components, which remain a recommended approach. React supports both models, and it is at the discretion of the individual to choose between them.

People who have been developing complex applications in React for years have become extremely comfortable with the Class component model, and it is difficult to move away from a tried and tested one. But going with the flow, extensive studies suggest that in later versions there might be a significant improvement in performance with the use of Functional components, and code maintenance would also be easier. It is suggested, as experts now recommend, to try Functional components with hooks in one of your projects. Unless you try them out, you will not understand the difference, as the syntax change is initially a big hurdle to overcome. Only upon overcoming it can one appreciate that Functional components are simpler to code, need fewer lines, and are cleaner, easier to maintain, and better performing. Highly complex projects with tightly coupled modules are not ideal for Functional components, so you must choose wisely based on the project's complexity, but you should give Functional components a try.

Conclusion:

Since most large organizations are going for 'hooks' and Functional components, it is worth trying them to experience their benefits and judge from one's own experience which one is better. The only challenge is to adapt to the significant syntax changes and to start with less complex projects where interdependency between modules is low. Both Functional and Class components will keep ruling the React world, and one has to decide on a per-project basis which one to choose.

Import JSON Files into MySql Database Table Part – 1

Introduction:

In most migration-related activities, you might face situations where you have to import JSON files into a relational database. Sometimes the JSON data structure varies from file to file. As the size of the JSON files is large, it may not be possible to import them manually, and a custom application may take a long time to import the data into the relational database. The execution time of such custom applications may also affect the project timeline.

Screenshot 1:
Screenshot 2:

If we observe the above two screenshots closely, we can notice that the two JSON files are not in the same format, yet we have to store both files in a single relational table.

Microsoft Azure provides an effective solution to overcome this challenge. In this blog, you will learn how to handle such a situation.

Pre-requisites

  1. Azure subscription
  2. Azure Data Lake storage account (Data Lake Gen1 or Gen2)
  3. Azure Data Lake Analytics account
  4. Azure Data Factory account
  5. Raw JSON data file

Steps to be performed

  1. Upload the raw JSON data into Data Lake storage
  2. Create a U-SQL procedure in Data Lake Analytics
  3. Test the procedure
  4. Create and assign permissions to a service principal (one-time setup)
  5. Create an Azure Data Factory pipeline to process the raw JSON and convert it to a relational structure (in this case, a CSV file)
  6. Create another Data Factory pipeline to import the converted CSV-formatted data into any relational database (in this case, MySQL)
Step 1: Upload raw JSON data into Data Lake storage:

To perform any operation on the raw JSON, you must first upload it to the storage account. You can use Azure Data Lake Gen1 or Gen2 to store the raw JSON.

To upload the raw JSON file, you can use Azure Storage Explorer registered with your Azure subscription details, or you can use other processes to upload it.

Once the raw JSON files are uploaded to your desired location, you can check the data within the Azure Data Lake account.

Step 2: Create a U-SQL procedure in Data Lake Analytics:

Now we must create a U-SQL stored procedure that will help us convert the multi-structured JSON files to a relational structure. In this step, we will convert the JSON data to CSV format with PIPE (|) separation. To do that, we have to create a database in Azure Data Lake Analytics using the 'New Job' option like below-

Then run the U-SQL script to create the database-

DROP DATABASE IF EXISTS <YourDatabaseName>;
CREATE DATABASE <YourDatabaseName>;

Now register two DLLs, Newtonsoft.Json and Microsoft.Analytics.Samples.Formats, in the newly created database. Download the DLLs from the link, upload both to the storage account, and then register them in the database.

CREATE ASSEMBLY IF NOT EXISTS [Newtonsoft.Json] FROM <YourStoragePath>;
CREATE ASSEMBLY IF NOT EXISTS [Microsoft.Analytics.Samples.Formats] FROM <YourStoragePath>;

Now we have to run the U-SQL script below, which creates the procedure. You can modify the script as per your requirements, e.g., the input and output file paths.

CREATE PROCEDURE [SlackToTeams].dbo.uspCreateDirectMessageCsv(@ThreadID string)
AS
BEGIN
REFERENCE ASSEMBLY [SlackToTeams].[Newtonsoft.Json];
REFERENCE ASSEMBLY [SlackToTeams].[Microsoft.Analytics.Samples.Formats]; 
USING Microsoft.Analytics.Samples.Formats.Json;
//These external parameters will be populated by ADF based on the time slice being executed.
DECLARE EXTERNAL @ThreadID string ="D5123EFLH5";
DECLARE @InputPath string = @"/Approved_Direct_Message/"+@ThreadID+"/{*}.json";
DECLARE @OutputFile string = @"/DirectMessageFile/Messages/"+@ThreadID+".csv";
@RawData = 
EXTRACT 
 [type] string
,[subtype] string
,[ts] string
,[user] string
,[MessageTS] string
,[text] string
,attachments string
,[files] string
,[bot_id] string
,[username] string
,thread_ts string
,reply_count string
,reply_users_count string
,latest_reply string
,[reply_users] string
,[replies] string
FROM @InputPath
USING new JsonExtractor();
@CreateJSONTuple = SELECT 
[type] 
,[subtype] 
,[ts] 
,[user] 
,[MessageTS] 
,[text] 
,attachments 
,[files] 
,[bot_id] 
,[username] 
,thread_ts 
,reply_count 
,reply_users_count 
,latest_reply 
,[reply_users] 
,[replies] 
,JsonFunctions.JsonTuple([files])?? NULL AS FileData 
FROM @RawData;
@DataSet =
SELECT
 @ThreadID AS ThreadID
,[type] ?? "" AS MessageType
,[subtype] ?? "" AS MessageSubType
,[ts] ?? "" AS MessageTS
,[user] ?? "" AS user
,[text]  ?? "" AS MessageText
,attachments  ?? "" AS attachments
,FileData["id"] ?? "" AS FileID
,[files]  ?? "" AS filesTxt
,[bot_id]  ?? "" AS BotID
,[username]  ?? "" AS Username
,thread_ts  ?? "" AS ThreadTS
,reply_count  ?? "" AS ReplyCount
,reply_users_count  ?? "" AS ReplyUsersCount
,latest_reply ?? "" AS LatestReply
,reply_users ?? "" AS ReplyUsers
,[replies] ?? "" AS Replies
FROM @CreateJSONTuple;
OUTPUT @DataSet
TO @OutputFile
USING Outputters.Text(outputHeader:true,delimiter:'|');
END;
Step 3: Test the ADLA Procedure

Before placing the new procedure into the Data Factory, we must test whether it is working. Open a new job and call the procedure, providing the parameter as mentioned below-

[SlackToTeams].dbo.uspCreateDirectMessageCsv("ABCDE123");

Then we will be able to see the output CSV file in the output folder.

Open the CSV file and check that it has all the columns we defined in the procedure, with PIPE (|) separation.

In Part 2 of the series, we will learn how to create and assign permissions to a service principal and import the CSV file into a MySQL database (or any relational DB).

How To – Bulk Copy Data from ORACLE to SQL Server

Case Study:

As part of migration activities, some users may need to transfer very large sets of data from a remote Oracle DB server to a Microsoft SQL Server database.

To make it a bit more challenging, my task involved moving millions of records from a large number of tables without any discrepancy and within an acceptable timeline.

Technical Options:

There are numerous approaches to bulk copy records from Oracle to SQL Server. Following are a few options which may be considered.

  1. Microsoft SQL Server Migration Assistant for Oracle
  2. Import Using Oracle Client and SQL Server Management Studio
  3. Linked Server on SQL Server pointing to the Oracle database
  4. Export to CSV file and import to SQL Server via bulk copy

There may be driving factors to choose one option over the other.

Of the above listed options, we preferred executing with Option #2.

In the following few sections, I will demonstrate the basic requirements, system setup and configuration required to transfer the data.

Pre-requisites:

Important Note: During our evaluation, we faced some compatibility issues between the 64-bit Oracle Database Instant Client and SQL Server Management Studio during connectivity, so we switched to the 32-bit Oracle Instant Client. It is therefore suggested to use the 32-bit version.


Instructions for installing Oracle Instant Client on Windows:

  1. Create a directory for the Oracle client components e.g., “c:\oml4rclient_install_dir”
  2. Go to the Oracle Database Instant Client download page.
  3. In the “Instant Client for Microsoft Windows” section choose Instant Client for Microsoft Windows (32-bit).
  4. From the next page, download the "Basic Package" and save the file in the directory created in Step 1.
  5. Unzip the file. The files are extracted into a subdirectory called instantclient_version, where version is your version of Oracle client. e.g., c:\oml4rclient_install_dir\instantclient_19_9
  6. Return to the Instant Client downloads page for Microsoft Windows (32-bit).
  7. Accept the license agreement and select Instant Client Package – SDK for your version of Oracle Database.
  8. Save the file in the installation directory that you created in Step 1.
  9. Unzip the file. The files are extracted into the instantclient_version subdirectory.
  10. Add the full path of the Instant Client to the environment variables OCI_LIB64 and PATH. The following steps set the variables to the path used in this example, c:\oml4rclient_install_dir\instantclient_19_9:
    a. In Windows Control Panel, choose System, then click Advanced system settings.
    b. On the Advanced tab, click Environment Variables.
    c. Under System variables, create OCI_LIB64 if it does not already exist. Set the value of OCI_LIB64 to c:\oml4rclient_install_dir\instantclient_19_9.
    d. Under System variables, edit PATH to include c:\oml4rclient_install_dir\instantclient_19_9.

Steps to Import data:

After all the system setup is done, follow the below steps to import the data.

  • Open SQL Server Management Studio and connect to the SQL Server.
  • Create a new database with a suitable name.
  • Right-click on the newly created database and select Tasks -> Import Data.
  • Click Next.
  • Select ".Net Framework Data Provider for Oracle" from the list of options available in the Data source dropdown.
  • Under the Security section, provide the Oracle username and password.
  • Under the Source section, enter the Data Source in the following format: [Oracle Server URL]:[Port (if not running on the default port)]/[NameSpace] — for example, oracleserver.contoso.com:1521/ORCL (hypothetical values).
  • Click Next. On successful connection with the Oracle server, it will proceed to the next screen.
  • Select the destination as SQL Server Native Client, and provide the server name, SQL Server credentials and database. Click Next.
  • From the next screen, select "Copy data from one or more tables or views" and click Next.
  • From the next screen, select all the tables or only the tables which are required.
  • Click on the Edit Mappings button and, from the popup window, select the destination schema name. In our case, we selected the SQL Server default 'dbo' schema.
  • Check the Run Immediately checkbox and click on the Next button.
  • On the final screen, click on the Finish button to start copying the data.
  • Once done, explore the SQL Server database and verify the data against the Oracle DB.
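
For example, a quick row-count comparison on both sides can confirm the copy (the table name below is hypothetical):

-- On SQL Server
SELECT COUNT(*) FROM dbo.EMPLOYEES;
-- On Oracle
SELECT COUNT(*) FROM EMPLOYEES;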

Summary:

Our choice of copy option was driven by the following factors:

  • Very little setup activity
  • Copying of data from multiple tables is fully automated, hence there is little chance of data discrepancy


Use ReactJS to Connect to SharePoint (Online/On Premises) from Localhost

Introduction:

Traditionally, SharePoint custom applications are created using SPFx in SharePoint 2016 and 2019. However, this needs additional server configuration, and the React/SPFx versions involved are also very old.

Today, we can easily use ReactJS to create single-page custom applications for both on-premises and online SharePoint, covering 2013, 2016 and 2019. Although currently this applies only to SPO classic sites, the ease of development and deployment makes ReactJS a very popular choice.

The effort here is to describe a step-by-step approach to creating a custom application using the latest ReactJS and PnPJS. Hopefully this will serve as a quick reference for developers interested in building custom apps in SPO.

Project Configuration

  1. Create a new React App using the following command
    npx create-react-app my-app
    cd my-app    
    
  2. Open the project in VSCode or WebStorm or any other editor
  3. Open the terminal
  4. Run the below command
    npm install concurrently sp-rest-proxy --save-dev
    
  5. A successful installation can be confirmed when concurrently and sp-rest-proxy show up under devDependencies in package.json.
  6. Run the following command to use PnPJS and Bootstrap:
    npm install @pnp/sp react-bootstrap
  7. Create “api-server.js” file at the root of the solution where package.json file resides
  8. Open “api-server.js” file and add the below code block
    const RestProxy = require('sp-rest-proxy');
    const settings = {
      configPath: './config/private.json', 
      // Location for SharePoint instance mapping and credentials
      port: 8081, // Local server port
      //staticRoot: 'node_modules/sp-rest-proxy/static', 
      // Root folder for static content
    };
    
    const restProxy = new RestProxy(settings);
    restProxy.serve();
    
    
  9. Add the below lines under the package.json scripts block
    "proxy": "node ./api-server.js",
    "startServers": "concurrently --kill-others \"npm run proxy\" \"npm run start\""
    
  10. Confirm that the package.json scripts block appears like the one below
    "scripts": {
      "start": "react-scripts start",
      "build": "react-scripts build",
      "test": "react-scripts test",
      "eject": "react-scripts eject",
      "proxy": "node ./api-server.js",
      "startServers": "concurrently --kill-others \"npm run proxy\" \"npm run start\""
    }
    
  11. Add the below code block to the end of package.json
    "proxy": "http://127.0.0.1:8081"
    

Successful implementation of all the above changes can be confirmed when the package.json file appears like the one below

{
  "name": "sp-react-app",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@pnp/sp": "^2.0.5",
    "@testing-library/jest-dom": "^4.2.4",
    "@testing-library/react": "^9.5.0",
    "@testing-library/user-event": "^7.2.1",
    "bootstrap": "^4.4.1",
    "react": "^16.13.1",
    "react-bootstrap": "^1.0.1",
    "react-dom": "^16.13.1",
    "react-scripts": "3.4.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject",
    "proxy": "node ./api-server.js",
    "startServers": "concurrently --kill-others \"npm run proxy\" \"npm run start\""
  },
  "eslintConfig": {
    "extends": "react-app"
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  },
  "devDependencies": {
    "concurrently": "^5.2.0",
    "sp-rest-proxy": "^2.11.1"
  },
  "proxy": "http://127.0.0.1:8081",
  
}

Setup connection to SharePoint

Open a terminal in the project folder and run the below command.

npm run proxy

Provide the SharePoint site URL, username and password when prompted to start the proxy at localhost:8081. Open http://localhost:8081 in a browser and validate that the REST API page can be seen.

A REST URL such as "_api/web" can also be provided to validate the connection.

Setup @Pnp/Sp

Open the App.js file and add the below constructor.

constructor(props) {
  super(props);

  sp.setup({
    sp: {
      headers: {
        Accept: 'application/json;odata=verbose',
      },
      baseUrl: 'http://localhost:3000/',
    },
  });
}

Connect to SharePoint from Application

Create a new method in the App.js file to get the current user information.

getCurrentUser = () => {
   sp.web.currentUser.get().then((currentUser) => {
     console.log('currentUser', currentUser);            
   });  
};

Call the above method in componentDidMount().
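
For example:

componentDidMount() {
  this.getCurrentUser();
}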

Run the project

Open the terminal and run the below command.

npm run startServers

Open the browser console at http://localhost:3000 to see the current user information from SharePoint.

Takeaways

We can use this concept to create any custom application in SharePoint, particularly where we don't have the option to create an SPFx solution and want to use ReactJS or any other modern JavaScript framework.

Feel free to comment or get in touch in case you need any clarifications or further help.

How To Programmatically Upload Large Files to Yammer Using Rest API and AAD Token – Part 2

Introduction

In continuation of the previous part of the series, where we created an AAD app, in this part we proceed to create a C# console application that fetches an AAD token via a user login prompt and uses the generated token to upload large files using the Yammer API.

The pre-requisites remain the same as in the previous article.

Creating the App

  1. Open Visual Studio and create one C# (.NET Framework) solution of type Console Application.
  2. After the solution is created, add NuGet package references for the following packages:
    • Microsoft.Identity.Client (for AAD token generation)
    • RestSharp (for making REST calls)
    • Newtonsoft.Json (for JSON serialization and deserialization)
  3. Add the Azure AD App ID, Tenant ID and Yammer Network ID as follows in the app.config. Replace them with the correct values as per your AD app and tenant settings.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
    </startup>
  <appSettings>
    <add key="TenantId" value="XXXXX-XXXX-XXXXX-XXXX-XXXXXXX" />
    <add key="AzureADAppClientID" value="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" />
    <add key="YammerNetworkID" value="0000000" />
  </appSettings>
</configuration>

Adding Helper and Model Classes

We are going to use two helper files, "MicrosoftIdentityClientTokenHelper.cs" and "MicrosoftTokenCacheHelper.cs", for Microsoft Identity Client configuration and token caching, plus model classes for capturing responses from the Yammer API.

MicrosoftIdentityClientTokenHelper.cs:

public static class MicrosoftIdentityClientTokenHelper
    {
        static MicrosoftIdentityClientTokenHelper()
        {
            if (!string.IsNullOrEmpty(ClientId) && !string.IsNullOrEmpty(Tenant))
            {
                _clientApp = PublicClientApplicationBuilder.Create(ClientId)
                    .WithAuthority($"{Instance}{Tenant}")
                    .WithDefaultRedirectUri()
                    .Build();
                MicrosoftTokenCacheHelper.EnableSerialization(_clientApp.UserTokenCache);
            }
        }
        private static string ClientId = ConfigurationManager.AppSettings["AzureADAppClientID"];

        private static string Tenant = ConfigurationManager.AppSettings["TenantId"];
        private static string Instance = "https://login.microsoftonline.com/";
        private static IPublicClientApplication _clientApp;

        public static IPublicClientApplication MicrosoftIdentityPublicClientApp { get { return _clientApp; } }
    }

MicrosoftTokenCacheHelper.cs:

public static class MicrosoftTokenCacheHelper
    {
        public static readonly string CacheFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location + ".msalcache.bin3";

        private static readonly object FileLock = new object();

        public static void BeforeAccessNotification(TokenCacheNotificationArgs args)
        {
            lock (FileLock)
            {
                args.TokenCache.DeserializeMsalV3(File.Exists(CacheFilePath)? ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath),null,
                    DataProtectionScope.CurrentUser): null);
            }
        }

        public static void AfterAccessNotification(TokenCacheNotificationArgs args)
        {
            if (args.HasStateChanged)
            {
                lock (FileLock)
                {
                    File.WriteAllBytes(CacheFilePath,ProtectedData.Protect(args.TokenCache.SerializeMsalV3(),null,DataProtectionScope.CurrentUser));
                }
            }
        }

        internal static void EnableSerialization(ITokenCache tokenCache)
        {
            tokenCache.SetBeforeAccess(BeforeAccessNotification);
            tokenCache.SetAfterAccess(AfterAccessNotification);
        }
    }

Models.cs

public class YammerCreateFileUploadSessionResponse
    {
        public string url { get; set; }
        public string filename { get; set; }
        public long uploaded_file_id { get; set; }
        public long uploaded_file_version_id { get; set; }
        public bool is_new_file { get; set; }
        public string storage_type { get; set; }
        public string used_path { get; set; }
    }

    public class YammerCompleteDirectUploadSessionResponse
    {
        public long id { get; set; }
        public int network_id { get; set; }
        public string url { get; set; }
        public string web_url { get; set; }
        public string type { get; set; }
        public string name { get; set; }
        public string original_name { get; set; }
        public string full_name { get; set; }
    }
    public class YammerUploadToSharepointResponse
    {
        public Uri ContentDownloadUrl { get; set; }
        public string id { get; set; }
    }

Getting AAD Token for Yammer using Microsoft Authentication Library (MSAL)

Please use the following code to show the Microsoft authentication prompt, where the user can enter their credentials and sign in.

The following code block tries to get the token in the way described below:
  • If the user information is already cached and the token is still valid, the existing token is returned
  • If the user info is cached but the token has expired, it automatically renews the token using the refresh token
  • If neither of the above works, it shows the sign-in prompt
public static string GetYammerBearerToken(IPublicClientApplication oMicrosoftIdentityPublicClientApp)
        {
            AuthenticationResult authResult = null;
            string[] scopes = new string[] { "https://api.yammer.com/user_impersonation" };

            var cachedAccounts = oMicrosoftIdentityPublicClientApp.GetAccountsAsync().Result;
            var firstAccount = cachedAccounts.FirstOrDefault();

            try
            {
                if (firstAccount != null)
                {
                    authResult = oMicrosoftIdentityPublicClientApp.AcquireTokenSilent(scopes, firstAccount).ExecuteAsync().Result;
                    return authResult.AccessToken;
                }
                else
                {
                    throw new MsalUiRequiredException("401","No Cached User Account. Please Sign-in.");
                }
            }
            catch (MsalUiRequiredException ex)
            {
                System.Diagnostics.Debug.WriteLine($"MsalUiRequiredException: {ex.Message}");

                try
                {
                    authResult = oMicrosoftIdentityPublicClientApp.AcquireTokenInteractive(scopes)
                        .WithAccount(cachedAccounts.FirstOrDefault())
                        .WithPrompt(Prompt.SelectAccount)
                        .ExecuteAsync().Result;
                    return authResult.AccessToken;
                }
                catch (MsalException msalex)
                {
                    Console.WriteLine(msalex.Message);
                    throw;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                throw;
            }
        }

Preparation Before Calling the Yammer File Upload API

Before going on to the further steps, you should have a Yammer group ID and the network ID of your Yammer tenant.

For getting the Yammer network ID using the REST API, follow the method sketched below.
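
As a sketch, consistent with the RestSharp calls used later in this post, a GET request to Yammer's documented endpoint https://www.yammer.com/api/v1/networks/current.json (with the AAD bearer token already acquired) lists the networks of the signed-in user, each with an id field:

var client = new RestClient("https://www.yammer.com");
var request = new RestRequest("/api/v1/networks/current.json", Method.GET);
request.AddHeader("Authorization", "Bearer " + YammerBearerToken);
var webResponse = client.Execute(request);
// webResponse.Content is a JSON array of networks; each entry's "id" is a network ID.
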
For getting the group ID, use any of the following methods:
  • Manually from old Yammer: copy the value of feedId from the address bar.
  • From the new Yammer, manually copy the value after the /group/ segment in the address bar.

Calling Yammer API to Upload Large Files

For uploading large files to Yammer, we need to make three consecutive API calls to complete the process, after which the uploaded files will be displayed in the Yammer group.

Following is the sequence of calls:

  1. A POST call to https://filesng.yammer.com/v3/createUploadSession to create the upload session.
  2. A PUT call to the session URL returned from the above method, uploading the file bytes in multiple chunks (each chunk should not exceed 4MB).
  3. Once all the chunks are uploaded in the previous step, a POST call to https://filesng.yammer.com/v3/completeDirectUploadSession to mark the completion of the upload process. After this call, Yammer will show the file in the Files section of the Yammer group.

In each of the above REST calls, we will pass the AAD token in the header of the call as follows:

request.AddHeader("Authorization", "Bearer " + YammerBearerToken);
Create Upload Session:
private static YammerCreateFileUploadSessionResponse CreateYammerUploadSession(object oRequest, ref string YammerBearerToken)
        {
            var oMicrosoftIdentityPublicClientApp = MicrosoftIdentityClientTokenHelper.MicrosoftIdentityPublicClientApp;
            YammerCreateFileUploadSessionResponse response = new YammerCreateFileUploadSessionResponse();
            YammerBearerToken = YammerBearerToken.Replace(Environment.NewLine, "");
            try
            {

                Console.WriteLine("Calling API createUploadSession.");
                var host = "https://filesng.yammer.com";

                var client = new RestClient(host);
                var request = new RestRequest("/v3/createUploadSession", Method.POST);

                request.AddHeader("Authorization", "Bearer " + YammerBearerToken);
                request.AddHeader("Content-Type", "application/json; charset=UTF-8");
                request.AddJsonBody(JsonConvert.SerializeObject(oRequest));

                var webResponse = client.Execute(request);
                if (!webResponse.IsSuccessful)
                {
                    throw new WebException(webResponse.StatusDescription);
                }

                return JsonConvert.DeserializeObject<YammerCreateFileUploadSessionResponse>(webResponse.Content);
            }
            catch
            {
                throw;
            }

        }
Upload File by Session URL:
private static YammerUploadToSharepointResponse UploadFileBySession(string url, byte[] file)
        {
            int fragSize = 1024 * 1024 * 4;
            var arrayBatches = ByteArrayIntoBatches(file, fragSize);
            int start = 0;
            string response = "";

            foreach (var byteArray in arrayBatches)
            {
                int byteArrayLength = byteArray.Length;
                var contentRange = " bytes " + start + "-" + (start + (byteArrayLength - 1)) + "/" + file.Length;

                using (var client = new HttpClient())
                {
                    var content = new ByteArrayContent(byteArray);
                    //content.Headers.Add("Content-Length", byteArrayLength.ToProperString());
                    content.Headers.Add("Content-Range", contentRange);

                    response = client.PutAsync(url, content).Result.Content.ReadAsStringAsync().Result;
                }

                start = start + byteArrayLength;
            }
            return JsonConvert.DeserializeObject<YammerUploadToSharepointResponse>(response);
        }
        private static IEnumerable<byte[]> ByteArrayIntoBatches(byte[] bArray, int batchLength)
        {
            int arrayLength = bArray.Length;

            int i = 0;
            // Emit full-size batches
            for (; arrayLength > (i + 1) * batchLength; i++)
            {
                byte[] batch = new byte[batchLength];
                Array.Copy(bArray, i * batchLength, batch, 0, batchLength);
                yield return batch;
            }

            // Emit the final, possibly smaller, batch
            int bytesLeft = arrayLength - i * batchLength;
            if (bytesLeft > 0)
            {
                byte[] lastBatch = new byte[bytesLeft];
                Array.Copy(bArray, i * batchLength, lastBatch, 0, bytesLeft);
                yield return lastBatch;
            }
        }
Complete Upload Session:
private static YammerCompleteDirectUploadSessionResponse CompleteYammerUploadSession(object oRequest, ref string YammerBearerToken)
        {
            YammerCreateFileUploadSessionResponse response = new YammerCreateFileUploadSessionResponse();
            YammerBearerToken = YammerBearerToken.Replace(Environment.NewLine, "");
            var oMicrosoftIdentityPublicClientApp = MicrosoftIdentityClientTokenHelper.MicrosoftIdentityPublicClientApp;
            try
            {
                Console.WriteLine("Calling API completeDirectUploadSession.");
                var host = "https://filesng.yammer.com";

                var client = new RestClient(host);
                var request = new RestRequest("/v3/completeDirectUploadSession", Method.POST);

                request.AddHeader("Authorization", "Bearer " + YammerBearerToken);
                request.AddHeader("Content-Type", "application/json; charset=UTF-8");
                request.AddJsonBody(JsonConvert.SerializeObject(oRequest));

                var webResponse = client.Execute(request);
                if (!webResponse.IsSuccessful)
                {
                    throw new WebException(webResponse.StatusDescription);
                }
                return JsonConvert.DeserializeObject<YammerCompleteDirectUploadSessionResponse>(webResponse.Content);
            }
            
            catch
            {
                throw;
            }
        }

The final step is to write our Main method, which will generate the AAD token, prepare the request objects and call the methods created above.

Main Method:
static void Main(string[] args)
        {
            string YammerNetworkID = ConfigurationManager.AppSettings["YammerNetworkID"].ToString();

            Console.WriteLine("Enter Yammer Group ID:");
            string groupID = Console.ReadLine();

            Console.WriteLine("Enter Full path of Local File in Drive:");
            string FileFullPathinLocalDrive = Console.ReadLine();

            string strFileName = Path.GetFileName(FileFullPathinLocalDrive);
            bool uploadSuccess = false;
            try
            {
                var oMicrosoftIdentityPublicClientApp = MicrosoftIdentityClientTokenHelper.MicrosoftIdentityPublicClientApp;
                var oSessionRequestReq = new
                {
                    filename = strFileName,
                    group_id = groupID,
                    network_id = int.Parse(YammerNetworkID),
                    is_all_company = false,
                    upload_job_id = Guid.NewGuid().ToString()
                };
                string YammerBearerToken = GetYammerBearerToken(oMicrosoftIdentityPublicClientApp);
                var createSessionResponse = CreateYammerUploadSession(oSessionRequestReq, ref YammerBearerToken);
                var uploadResponse = UploadFileBySession(createSessionResponse.url, System.IO.File.ReadAllBytes(FileFullPathinLocalDrive));
                var oYammerCompleteDirectUploadSessionRequest = new
                {
                    filename = createSessionResponse.filename,
                    group_id = groupID,
                    network_id = int.Parse(YammerNetworkID),
                    is_all_company = false,
                    is_new_file = true,
                    sharepoint_id = uploadResponse.id,
                    uploaded_file_id = createSessionResponse.uploaded_file_id,
                    uploaded_file_version_id = createSessionResponse.uploaded_file_version_id
                };
                var uploadedFile = CompleteYammerUploadSession(oYammerCompleteDirectUploadSessionRequest, ref YammerBearerToken);

                if (uploadedFile != null && !string.IsNullOrWhiteSpace(uploadedFile.web_url))
                {
                    uploadSuccess = true;
                }

                Console.WriteLine(uploadSuccess ? $"File uploaded to yammer and file url is {uploadedFile.web_url}" : $"Upload failed");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

Summary

Once all the above execution is complete, you should be able to see the uploaded document in the Files tab of the Yammer group.

Learnings

This two-part article demonstrates how to register an Azure AD app and generate a token using it, overcoming the primary limitation of the official Yammer API for uploading large files.

How To Programmatically Upload Large Files to Yammer Using Rest API and AAD Token – Part 1

Introduction:

If your organization uses Yammer as its internal socializing platform, you may be tempted to share files there to reach the audience of a specific Yammer group, instead of using the conventional email method, which not only makes adding recipients cumbersome but also feels formal.

Not only text messages: using the documented Yammer File Upload API, files can also be uploaded programmatically to Yammer groups. However, this method limits the maximum file size to only 4MB, which is very small.

This limitation may not be widely known until someone tries to programmatically upload a large file to Yammer.

An alternative set of Yammer APIs, which has no limitation on maximum file size, is discussed in this article, though these APIs are not officially documented by the Microsoft/Yammer team yet.

In this two-part article, we are going to create an AAD app (Part 1) and a C# console application (Part 2) to fetch an AAD token via a user login prompt and use the generated token to upload large files using the unofficial Yammer API.

Pre-requisites

  • Any version of Visual Studio (tested well with Visual Studio 2019)
  • An active user account in Azure Active Directory having access to Yammer as well
  • Azure Active Directory (AAD) app for authentication and generating the delegated token

Azure AD App

Register a new app in Azure Active Directory

Follow the below steps:

  1. Sign into the Azure portal from https://portal.azure.com.
  2. If your account gives you access to more than one tenant, select your account in the top right corner, and set your portal session to the Azure AD tenant that you want.
  3. In the left-hand navigation pane, select the Azure Active Directory service, and then select App registrations > New registration.
  4. When the Register an application page appears, enter your application's registration information:
    • Name: Enter a meaningful application name that will be displayed to users of the app (e.g. YammerADApp).
    • Supported account types: Accounts in this organizational directory only.
    • Redirect URI: Select Public client/native (mobile & desktop) and set the URI to https://login.microsoftonline.com/common/oauth2/nativeclient
  5. When finished, select Register.

Grant permissions to Azure AD App

Open the newly created Azure AD App and perform the following steps:

  • Click on API permission from left navigation.
  • Click on Add a permission.
  • From the Request API permissions screen, click on "Yammer".
  • Click on Delegated permissions.
  • Select "user_impersonation" and click on the Add permissions button.
  • The app should already have the "Microsoft Graph" > "User.Read" permission; if not, add this permission as well, following the steps above.

Enabling OAuth 2 Token generation for Azure AD App

Open the newly created Azure AD App and perform the following steps:

  1. Click on “Manifest” from left navigation.
  2. From right hand pane change the following JSON keys to true.
    • "oauth2AllowIdTokenImplicitFlow": true
    • "oauth2AllowImplicitFlow": true
  3. Click on Save to save the changes

Capture Azure AD Application Identifier

Open the newly created Azure AD App and perform the following steps:

  1. Click on “Overview” from left navigation.
  2. From the right-hand pane, copy the Application (client) ID and Directory (tenant) ID and save them for future use.

Summary

Once all the above steps are completed, we are ready to use this Azure AD app for token generation, and we will use it in our C# application.

Do not miss the next part of this article, where we create a C# console application that can be used to upload large files to Yammer using the AAD token.

Read Next Part :

How To Programmatically Upload Large Files to Yammer Using Rest API and AAD Token – Part 2

Automate Windows Authentication Popup in Selenium using Sikuli

Introduction:

Selenium can interact only with web applications. It cannot automate Windows or Flash-based objects. In situations where a Windows authentication popup appears, Selenium is unable to handle it. A third-party tool like AutoIt or Sikuli comes to the rescue here.

This blog targets automating the Windows authentication popup using Sikuli.

When to use Sikuli?

Sikuli is an open-source tool for test automation. Using image recognition techniques, Sikuli identifies and controls GUI components.

It can automate anything which is visible on the screen and is useful when there is no easy access to the GUI's internals or source code.

Advantages of Sikuli

  • Selenium WebDriver only supports web-based objects, but with Sikuli both Windows and web-based objects can be managed
  • Selenium WebDriver cannot handle Flash objects, but with Sikuli we can handle them
  • Sikuli can run on any platform: Windows, Mac, or Linux
  • Sikuli can automate tests running on remote servers
  • In some cases, if proper web element locators are not available, we can use Sikuli to identify the elements from the screen
  • Sikuli can be integrated with other tools like Selenium and Cucumber

How to integrate Sikuli with Selenium?

We can integrate Sikuli with Selenium in two different ways:

  • Add the Sikuli JAR file directly to the Eclipse IDE build path
  • Add the Sikuli Maven dependency to the project POM.xml file and build the project

How to automate the Windows Authentication popup using Sikuli?
  • Open the URL
  • Take a screenshot of the Username field and store it on a local drive
  • Take a screenshot of the Password field and store it on a local drive
  • Take a screenshot of the Sign in button and store it on a local drive
  • Add the below code snippet to automate the Windows authentication popup
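The original snippet was shared as a screenshot; the following is a minimal sketch of the same idea, assuming the captured images were saved as uname.png, password.png, and signin.png, and using a placeholder URL and placeholder credentials:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.sikuli.script.FindFailed;
import org.sikuli.script.Pattern;
import org.sikuli.script.Screen;

public class WindowsAuthPopupTest {
    public static void main(String[] args) throws FindFailed {
        // The Screen object gives access to all Sikuli actions (type, click, etc.)
        Screen screen = new Screen();

        // Each Pattern wraps the reference image of a control captured earlier
        Pattern username = new Pattern("C:\\sikuli\\uname.png");
        Pattern password = new Pattern("C:\\sikuli\\password.png");
        Pattern signIn   = new Pattern("C:\\sikuli\\signin.png");

        // Open the page that triggers the Windows authentication popup.
        // Note: driver.get() may block until the popup is handled, in which
        // case the Sikuli calls below should run on a separate thread.
        WebDriver driver = new ChromeDriver();
        driver.get("https://intranet.example.com/"); // placeholder URL

        // Sikuli locates each image on the screen and performs the action
        screen.type(username, "testUser");     // placeholder user name
        screen.type(password, "testPassword"); // placeholder password
        screen.click(signIn);
    }
}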

Details of the code snippet

  • First, create an object of the Screen class. The Screen class gives access to all the methods used by Sikuli.
  • Then create an object of the Pattern class for each control. A Pattern lets the user pass the reference of an image on which to perform an operation like click, double-click, or type.

When the program is executed, Sikuli searches the screen for the images that were passed as references in the Pattern objects. After that, the methods available in the Screen class for typing and clicking are used.

For an operation like typing a value into the username field, a Pattern object is created for the username field and the reference of an image file (uname.png) is passed to it. Then, using the Screen class method for typing, the action (type, click) is performed if a match for the image is found. The same happens for typing the value in the Password field and clicking on the Sign in button.

Conclusion

Sikuli is a powerful tool, and it works on any GUI. It can work on any platform (Windows/Linux/Mac) and can interact with virtual machines, remote desktops, and mobile simulators for Android and iPhone. Apart from automating Windows-based and Flash-based applications, Sikuli can be helpful in automating mobile testing using Android and iPhone emulators. Sikuli has a vast scope of use: it can extract text from an image using its basic OCR-based text recognition, and while Selenium alone cannot automate CAPTCHA, with Sikuli we can handle CAPTCHA images in Selenium.

ServiceNow Integration with SharePoint Online using Microsoft Graph – Explained
https://netwoven.com/sharepoint-custom-development/servicenow-integration-with-sharepoint-online-using-microsoft-graph-explained/ Fri, 11 Dec 2020

Microsoft Graph has opened up a vast opportunity to integrate different enterprise applications and create a seamless experience for enterprise users, where a single interface serves relevant content from all of the enterprise applications. This is a great leap towards improving efficiency and productivity. As more and more organizations adopt Microsoft 365, SharePoint and Teams are rapidly taking the lead as the primary work interface for millions of users. For them, accessing all relevant business information right from SharePoint and Teams brings that extra comfort. With Microsoft recently releasing a Graph connector for ServiceNow, it is now possible to access ServiceNow knowledge articles right from the SharePoint Search interface, just like any other SharePoint content.

This article provides you with step-by-step guidance to integrate ServiceNow with your Microsoft 365 tenant. Following this article, you should be able to set up search capability for your ServiceNow articles in your Office 365 online environment.

Pre-requisites

Primarily, you must have a valid ServiceNow account and must log in with your credentials to land on the ServiceNow home screen. If you are an admin user, your home screen will look slightly different.

In Office 365, we are provided with Microsoft Graph connectors, which help index third-party data and make it appear in Microsoft Search results. The third-party data can be hosted on-premises or in public or private clouds. These connectors expand the types of content sources that are searchable in your Microsoft 365 productivity apps and the broader Microsoft ecosystem.

Let us walk through the process, starting from setting up a ServiceNow connector through to getting its data in the search results of a SharePoint Online site.

Step 1: Add a Graph connector in the Microsoft 365 admin center

Sign in to your Microsoft 365 admin center.

On the left navigation pane, select Settings → Search & intelligence, select the Connectors tab, and click on the Add button.

Note: You will be able to see this Connectors option only if your Tenant users and administrators have opted into a Targeted Release, as it is a new feature released in Nov 2020. Please refer here to set your tenant users to Targeted Release.

Choose the ServiceNow connector from the connector gallery.

Step 2: Provide the connection name and ID

Step 3: Configure the connection settings and properties

I am opting for the Basic authentication type, hence providing my ServiceNow instance URL, username, and password.

Click the Sign in option; once the sign-in is successful, you will be taken to the properties section. By default, all the searchable properties are taken into consideration, but you still have an option to filter them out.

Here I am continuing with the default properties selection and building on it in the next section, “Assign property labels”, where Microsoft has given labels to different ServiceNow columns. If you wish, you can change them or keep them as they are.

Step 4: Manage the search schema

Step 5: Refresh Settings

I have changed my incremental refresh interval to 15 minutes instead of the default 4 hours.

Step 6: Review and Finish the settings

Once you review and click on Finish settings, you can see your connector in the Connectors section with the Connection State shown as ‘Publishing’. On completion of the publish process, its status changes to Ready (as with another ServiceNow connector created some time back).

Step 7: Setup the Vertical

Once the status is marked Ready, you will get an option towards the right to create the Vertical. Click on that link to create the vertical.

Step 8: Vertical Creation

Set up the Vertical as follows.

Choose your connector (created in the previous steps).

I am providing ‘*’ in the KQL query as I would like all the data to be included in my search results.

Review your settings and then click on ‘Add Vertical’.

Step 9: Enable the vertical

As soon as the Vertical is created, you need to enable it; otherwise, your ServiceNow search results will not be visible on the search results page.

Oops!! I cannot find any of them in the search results, and I do not see a vertical named ‘ServiceNow’ next to ‘News’.

So, something more needs to be done. Let me explain.

Step 10: Set up the Microsoft Search Settings

Go to the site settings of your SharePoint site and open ‘Configure Search Settings’.

As you can see, the ‘ServiceNow’ vertical is not present yet.

So, let me add it. Click on the ‘Add’ button to add the new Vertical and repeat steps 7 and 8.

Your review screen should now reflect the new vertical.

Now you can find your Vertical along with the others, but with a Disabled status.

Let me search again. I am still not getting the ServiceNow results.

Step 11: Enable the vertical

Change the state to ‘Enabled’.

It will take a while for the vertical to be refreshed on the search results page.

If you do not want to wait after enabling it, you can append cacheClear=true to the search page URL in SharePoint and Office to view the vertical immediately.

Now let us search again.

So, we are still missing something, and that is the setup of a result type.

Step 12: Add a Result Type

Go to the ‘Result types’ tab next to Verticals and click on the Add button.

Add a design layout; this is a mandatory step.

Click on the ‘Open Layout Designer’ button and choose a design. Fill the properties with the ServiceNow columns, paste the generated JSON, and click on ‘Next’. A minimal example of such a layout is sketched below.
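The layout is an Adaptive Card; the exact JSON comes from the Layout Designer, but a minimal illustrative sketch, assuming the connector exposes properties labeled title, url, and shortDescription, could look like this:

{
  "type": "AdaptiveCard",
  "version": "1.3",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "body": [
    {
      "type": "TextBlock",
      "text": "[{title}]({url})",
      "weight": "Bolder",
      "size": "Medium"
    },
    {
      "type": "TextBlock",
      "text": "{shortDescription}",
      "wrap": true
    }
  ]
}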

Review and finish the result type creation.

Let us search again.

Yippee!! We got the search results now. Let us open one.

Let us view the same record in the ServiceNow portal.

Hope you liked this article and found it helpful. Enjoy!!

Set Item Level Permission in SharePoint List using Power Automate
https://netwoven.com/sharepoint-custom-development/set-item-level-permission-in-sharepoint-list-using-power-automate/ Tue, 03 Nov 2020

Introduction

In a SharePoint list, if a privilege (like Read, Contribute, or Full Control) is granted to a SharePoint user or group, that user or the members of that group get that level of access on all items.

However, it may sometimes be required to limit users' access to only the items they themselves created or modified.

Employee payslips are a good example. While all members of the Accounts Department (group), which generates payslips, can access all payslips for all employees of the organization at a central storage location, the payslips of any given user are accessible only to that user.

In real-world scenarios, this can be a cumbersome manual process given the volume of items for which permissions have to be uniquely granted to a specified set of users or groups, since the default behavior of each list item is to inherit permissions from its parent list.

One approach is to use Power Automate, which breaks the default permission inheritance and sets up unique permissions on each SharePoint list item.

Creating the Solution

Create SharePoint List

I use another example in this article to demonstrate the case study and its solution. Begin by adding a SharePoint list named ‘ContactList’ to the Site Contents. In that ‘ContactList’, add a Manager column of type Person or Group.

I am associating a Manager with each contact item in the list; the Manager will be assigned Contribute access to the item so that they can modify it.

The next sections demonstrate the process of reaching the solution using Power Automate.

Setup the Flow

Log in with your Office 365 account to https://flow.microsoft.com/ and create a new “Automated flow”.

Assign a name to the Flow and select the trigger as “When an item is created or modified”.

Follow the below steps:

Step 1

Point the Flow trigger to the appropriate SharePoint Site Address and List Name.

Step 2

Add a new step with the “Send an HTTP request to SharePoint” action.

Note: Since this action will be used multiple times in the process, rename the action for better identification.

This action will break the default permission inheritance on the list item.

Fill in the action's fields as follows:

Site Address: Select the Site Address as in Step 1

Method: POST

Uri: Enter the following text:

_api/lists/getByTitle('List_Name')/items(@{triggerOutputs()?['body/ID']})/breakroleinheritance(copyRoleAssignments=false,clearSubscopes=true)

copyRoleAssignments – Specifies whether to copy the role assignments from the parent securable object.

clearSubscopes – With this parameter set to true, the role assignments for all child objects will be cleared, and those objects will inherit role assignments from the current object after this call.
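Under the hood, this action issues a plain SharePoint REST call. Conceptually, the resolved request for the ‘ContactList’ example looks like the following (the site URL and item ID are hypothetical, and the connector adds the authentication headers for you):

POST https://contoso.sharepoint.com/sites/YourSite/_api/lists/getByTitle('ContactList')/items(12)/breakroleinheritance(copyRoleAssignments=false,clearSubscopes=true)
Accept: application/json;odata=verbose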

Step 3

The next step is to fetch all the Manager IDs of a particular item from this list so that their access can be changed to Contribute. To do so, we will add another “Send an HTTP request to SharePoint” action and rename it to identify this step.

Site Address remains the same throughout.

Use the GET method and enter the below text as the URI:

_api/web/lists/getByTitle('List_Name')/items(@{triggerOutputs()?['body/ID']})?$select=Manager/Id&$expand=Manager

Step 4

Parse the JSON output from the “Send an HTTP request to SharePoint – Get User List” request using the “Parse JSON” action.

Paste the below text into the Schema field. The schema is simply the structure and semantics of the output of the previous step (i.e., Step 3). (Refer to this link for how to generate a schema.)

{
    "type": "object",
    "properties": {
        "d": {
            "type": "object",
            "properties": {
                "__metadata": {
                    "type": "object",
                    "properties": {
                        "id": {
                            "type": "string"
                        },
                        "uri": {
                            "type": "string"
                        },
                        "etag": {
                            "type": "string"
                        },
                        "type": {
                            "type": "string"
                        }
                    }
                },
                "Manager": {
                    "type": "object",
                    "properties": {
                        "results": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "__metadata": {
                                        "type": "object",
                                        "properties": {
                                            "id": {
                                                "type": "string"
                                            },
                                            "type": {
                                                "type": "string"
                                            }
                                        }
                                    },
                                    "Id": {
                                        "type": "integer"
                                    }
                                },
                                "required": [
                                    "__metadata",
                                    "Id"
                                ]
                            }
                        }
                    }
                }
            }
        }
    }
}
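For reference, a response that matches this schema would look something like the following (all values are hypothetical):

{
  "d": {
    "__metadata": {
      "id": "Web/Lists(guid'...')/Items(12)",
      "uri": "https://contoso.sharepoint.com/_api/Web/Lists(guid'...')/Items(12)",
      "etag": "\"2\"",
      "type": "SP.Data.ContactListListItem"
    },
    "Manager": {
      "results": [
        {
          "__metadata": {
            "id": "...",
            "type": "SP.Data.UserInfoItem"
          },
          "Id": 14
        }
      ]
    }
  }
}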

Step 5

Use the results output from the Parse JSON action to get the entire list of users; it is iterated through for each Manager's ID, which can be either a user ID or a group ID associated with the specific item.

Add another action “Send an HTTP request to SharePoint” to assign the required permission to the specific item ID.

Method: POST

Uri: Enter the below text:

_api/lists/getByTitle('List_Name')/items(@{triggerOutputs()?['body/ID']})/roleassignments/addroleassignment(principalid=@{items('Apply_to_each')?['Id']},roledefid=1073741827)

PrincipalId: taken from the Id field of the Parse JSON output.

RoleDefId: 1073741827 is the ID associated with the Contribute permission level. Refer to the below table for common permission levels and their predefined IDs; use the one your requirement calls for.

Permission Level    Permission ID
Full Control        1073741829
Read                1073741826
Contribute          1073741827
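These IDs are the defaults for the built-in permission levels. If your site uses custom permission levels, their IDs can be discovered with a simple GET request to the standard SharePoint REST endpoint below (the site URL is hypothetical); each role definition's Id in the response is the value to use for roledefid:

GET https://contoso.sharepoint.com/sites/YourSite/_api/web/roledefinitions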

That is all; the flow is ready to run.

Verifying the Flow Solution

Create a new item on the list, select any Person or Group in the Manager field and save the item.

For the item that was created or modified, check “Manage access” to confirm that the person/group selected in the Manager field has received Contribute permission on that item.
