Software Development

Creating a Standard ASP.NET Core Web API Infrastructure with…

Technologies/Tools Used

  • .NET Core 3.1 as Framework
  • Swagger as API Frontend
  • RepoDB as ORM Tool
  • MS SQL Server 2017 as Database
  • Visual Studio 2019 as Development Interface

Create Solution and the Web Project

Create an empty .NET Core API Project:

Setup Swagger as an API Interface:

Right-click the project and choose "Manage NuGet Packages".

In the NuGet Package Manager window, select the "Browse" tab and search for "swagger".

Install the library named “Swashbuckle.AspNetCore”.

Copy and paste this code block into Startup.cs => ConfigureServices(…)

services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
});

Resolve the missing dependency for the OpenApiInfo type by adding `using Microsoft.OpenApi.Models;` at the top of Startup.cs.

Go to the Configure method in the same class and paste the following code (UseSwagger serves the generated JSON document, UseSwaggerUI serves the UI on top of it):

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "MYAPI V1");
});

Navigate to Properties => launchSettings.json and modify the startup page so the Swagger UI is shown when our application runs:
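A minimal launchSettings.json change can look like the sketch below. The profile name "MyApi" and the ports are assumptions that depend on your project; the relevant setting is "launchUrl": "swagger".

```json
{
  "profiles": {
    "MyApi": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000"
    }
  }
}
```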

Hit F5 to test initial run with swagger.

Create Core and Business Projects

Now we will add one project for the Business layer and one for the Entities. Navigate to File => New => Project and create a Class Library (.NET Core) project:

In the end your Solution Explorer should look like this:

and your folder structure should look like this:

Finally we need to add the necessary NuGet libraries to use RepoDB in these newly created projects. Open the NuGet Package Manager and add the RepoDb package to the Core project:

And install the RepoDb.SqlServer package to the Business project.
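If you prefer the command line, the same packages can be added with the dotnet CLI. The project paths below are assumptions; substitute your actual project file locations:

```shell
dotnet add ./Core/Core.csproj package RepoDb
dotnet add ./Business/Business.csproj package RepoDb.SqlServer
```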


I aim to keep it simple, but not as simple as the standard examples on the web. I would also like to show you the minimum viable implementation of RepoDB. In order to achieve that, I am going to follow this scenario:

  • Create a DB with Two Tables: Products and ProductGroups
  • Create an Entity for those two tables: Product and ProductGroup
  • Create a Business Layer for Product and Product Group Repositories
  • Create Controllers for Product and ProductGroup



Create a SQL database and name it RepoDBTestDB. Add the following tables:

CREATE TABLE [dbo].[ProductGroups](
	[Id] [bigint] IDENTITY(1,1) NOT NULL,
	[Code] [nvarchar](50) NOT NULL,
	[Name] [nvarchar](255) NOT NULL,
	[AuditStatus] [int] NOT NULL,
	[CreatedDate] [smalldatetime] NOT NULL,
	[CreatedBy] [nvarchar](50) NOT NULL,
	[LastUpdatedDate] [smalldatetime] NULL,
	[LastUpdatedBy] [nvarchar](50) NULL,
 CONSTRAINT [PK_ProductGroups] PRIMARY KEY CLUSTERED ([Id] ASC)
)

ALTER TABLE [dbo].[ProductGroups] ADD  CONSTRAINT [DF_ProductGroups_AuditStatus]  DEFAULT ((0)) FOR [AuditStatus]
CREATE TABLE [dbo].[Products](
	[Id] [bigint] IDENTITY(1,1) NOT NULL,
	[Code] [nvarchar](50) NOT NULL,
	[Name] [nvarchar](255) NOT NULL,
	[Price] [float] NULL,
	[ProductGroupId] [bigint] NOT NULL,
	[AuditStatus] [int] NOT NULL,
	[CreatedDate] [smalldatetime] NOT NULL,
	[CreatedBy] [nvarchar](50) NOT NULL,
	[LastUpdatedDate] [smalldatetime] NULL,
	[LastUpdatedBy] [nvarchar](50) NULL,
 CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([Id] ASC)
)

ALTER TABLE [dbo].[Products] ADD  CONSTRAINT [DF_Products_ActiveStatus]  DEFAULT ((0)) FOR [AuditStatus]

ALTER TABLE [dbo].[Products]  WITH CHECK ADD  CONSTRAINT [FK_Products_ProductGroups] FOREIGN KEY([ProductGroupId])
REFERENCES [dbo].[ProductGroups] ([Id])

ALTER TABLE [dbo].[Products] CHECK CONSTRAINT [FK_Products_ProductGroups]

Fill the tables with sample data:
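The post's actual sample data is not reproduced here; a couple of illustrative rows (values are my own invention, purely for testing) could be inserted like this:

```sql
-- Illustrative values only; not the post's original sample data.
INSERT INTO dbo.ProductGroups (Code, Name, AuditStatus, CreatedDate, CreatedBy)
VALUES ('BEV', 'Beverages', 0, GETDATE(), 'admin');

-- Assumes the group above received identity value 1.
INSERT INTO dbo.Products (Code, Name, Price, ProductGroupId, AuditStatus, CreatedDate, CreatedBy)
VALUES ('BEV-001', 'Espresso', 2.5, 1, 0, GETDATE(), 'admin');
```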


(also available as a database project. See end of this post)

Core Objects

Having finished the database-level tasks, we can now concentrate on the Core/Entity design.

There are various approaches to entity design. I will intentionally stay away from any discussion of them, and will therefore use the existing data structure directly for the presentation layer as well. That means I am not going to create a separate "Model" (or View Model); every entity will have the same structure as it is kept in the database.

Anyway. Let us concentrate more on RepoDB. RepoDB has built-in entity mapping functionality: if your classes and properties have the same names as your tables and columns, it will map the data to your classes without any extra configuration. If the names differ, however, you have to define a mapping. To map classes to data we can either decorate our classes with RepoDB's built-in Map attribute or use its fluent mapping configuration. I will be using the attribute-based method for both the Product and ProductGroup entities.

First, I've created a BaseEntity class to gather the common properties into a single class and avoid repeating them everywhere. I've also decorated the Id property as a primary key with identity values. The properties set here will be inherited by the child classes derived from this class.

public class BaseEntity
{
    [Primary, Identity]
    public long Id { get; set; }
    public DateTime CreatedDate { get; set; }
    public string CreatedBy { get; set; }
    public DateTime? LastUpdatedDate { get; set; }
    public string LastUpdatedBy { get; set; }
}

Primary and Identity (both pretty self-documenting) are two RepoDB property attributes that mark the decorated property as the primary key and as an identity column, respectively.

Now we create the Product class and inherit from BaseEntity.

[Map("[dbo].[Products]")]
public class Product : BaseEntity
{
    public string Code { get; set; }
    public string Name { get; set; }
    [Map("Price")]
    public float RetailPrice { get; set; }
    public long ProductGroupId { get; set; }
    public int AuditStatus { get; set; }
}

As you see, in order to change the standard mapping we used the Map attribute. This attribute can be applied both to classes and to their properties. The Product entity in my model corresponds to the Products table in my database, and the RetailPrice property in my class maps to the Price column in that table.

I’ve also created ProductGroup class as follows:

[Map("[dbo].[ProductGroups]")]
public class ProductGroup : BaseEntity
{
    public string Code { get; set; }
    public string Name { get; set; }
    public int AuditStatus { get; set; }
}

In the end our Core project structure should look like this:

Business Layer

I've created three subfolders: one for my Repositories, one for the Interfaces of cross-cutting concerns such as dependency injection, and one for their default Implementations. I will allow the caller to inject the connection string and some other connection properties, so I've created an IAppSettings interface with the following properties:

public interface IAppSettings
{
    public string ConnectionString { get; set; }
    public int CommandTimeout { get; set; }
}

I've also created a class under the Implementations folder to hold the default implementation of the IAppSettings interface:

public class AppSettings : IAppSettings
{
    public string ConnectionString { get; set; }
    public int CommandTimeout { get; set; } = 600;
}

Now I will create two repositories under Repositories folder. One for managing Products and second for managing ProductGroups.

RepoDB gives us two standard base classes to implement the Repository pattern in our model. We can implement the pattern as a generic "fancy data access method", design our own aggregates, or, even better, employ both approaches in the same model. For managing a basic table (where one aggregate root is one single table) we can simply inherit from RepoDB's default data access implementation: BaseRepository&lt;TEntity, TDbConnection&gt;.

public class ProductRepository : BaseRepository<Product, SqlConnection>
{
    // ... constructor
}

There are several constructors for this repository. I don't want to dive into the details of each one, but I strongly encourage you to read about them in RepoDB's documentation (which is pretty good, btw). I will be using the one that takes a connection string and a command timeout.

public class ProductRepository : BaseRepository<Product, SqlConnection>
{
    IAppSettings settings;

    public ProductRepository(IAppSettings settings)
        : base(settings.ConnectionString, settings.CommandTimeout)
    {
        this.settings = settings;
    }
}

public class ProductGroupRepository : BaseRepository<ProductGroup, SqlConnection>
{
    IAppSettings settings;

    public ProductGroupRepository(IAppSettings settings)
        : base(settings.ConnectionString, settings.CommandTimeout)
    {
        this.settings = settings;
    }
}

When the Business layer is completed, it should look like this:

API Controllers

Normally it is pretty straightforward to inject the implementations, add a controller, and call the respective repository to expose data to the caller. Since we are done with Swagger, it is also self-documenting. Therefore I will show a basic implementation for the Product and ProductGroup controllers, and leave the professional implementation (with dependency injection of the repositories, a repository factory, and/or IUnitOfWork) for another post.

Let us start with the ProductsController. Simply open the Api project, navigate to Controllers, add a new controller named ProductsController, and paste the following code into it:

[ApiController]
[Route("[controller]")]
public class ProductsController : ControllerBase
{
    private IOptions<AppSettings> settings;

    public ProductsController(IOptions<AppSettings> settings)
    {
        this.settings = settings;
    }

    [HttpGet]
    public ActionResult Get()
    {
        var r = new ProductRepository(settings.Value);
        return new ObjectResult(r.QueryAll());
    }

    [HttpGet("{id}")]
    public ActionResult Get(long id)
    {
        var r = new ProductRepository(settings.Value);
        return new ObjectResult(r.Query(p => p.Id == id));
    }

    [HttpPost]
    public void Post([FromBody] Product value)
    {
        var r = new ProductRepository(settings.Value);
        r.Insert(value);
    }

    [HttpPut("{id}")]
    public void Put(long id, [FromBody] Product value)
    {
        var r = new ProductRepository(settings.Value);
        r.Update(value);
    }

    [HttpDelete("{id}")]
    public void Delete(long id)
    {
        var r = new ProductRepository(settings.Value);
        r.Delete(id);
    }
}

Also add the ProductGroupsController in the same way.
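A minimal ProductGroupsController can simply mirror the pattern above. This is a sketch, not the post's original code; the route follows the standard ASP.NET Core [controller] convention:

```csharp
[ApiController]
[Route("[controller]")]
public class ProductGroupsController : ControllerBase
{
    private readonly IOptions<AppSettings> settings;

    public ProductGroupsController(IOptions<AppSettings> settings)
    {
        this.settings = settings;
    }

    // Returns all product groups via the RepoDB base repository.
    [HttpGet]
    public ActionResult Get()
    {
        var r = new ProductGroupRepository(settings.Value);
        return new ObjectResult(r.QueryAll());
    }
}
```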

Before we run our project we have to do two things: initialize RepoDb.SqlServer, and inject the connection string information into our repositories by reading it from appsettings.json.

To initialize RepoDb.SqlServer, go to Startup.cs and call the Initialize method of the RepoDB SQL Server bootstrap inside the constructor.
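The initialization is a single call in the Startup constructor. SqlServerBootstrap.Initialize() is the bootstrap entry point shipped with the RepoDb.SqlServer package; the rest of the constructor below is the standard template code:

```csharp
public Startup(IConfiguration configuration)
{
    Configuration = configuration;

    // Registers SQL Server support (statement builder, DB helpers, mappings) with RepoDB.
    RepoDb.SqlServerBootstrap.Initialize();
}

public IConfiguration Configuration { get; }
```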

To inject the connection string, we add a JSON section matching the AppSettings class we created earlier to the appsettings.json file, updating the connection string to point to our database.

And add it into development configuration of the appsettings.json as well.

"SettingsFromApi": {
    "ConnectionString": "Server=.;Database=RepoDBTestDB;Integrated Security=SSPI;",
    "CommandTimeout": 800
}

Now go to Startup.cs and paste the following code block so your app reads the configuration from the settings JSON file, deserializes it, and injects it via the standard IOptions interface of the .NET Core framework:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.Configure<AppSettings>(Configuration.GetSection("SettingsFromApi"));

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "RepoDB API", Version = "v1" });
    });
}



Finally, the folder structure of your Api project should look like this:

Run your project and enjoy 🙂

The complete code for this post can be found at:

Software Development

ORM Trials

I am not a big fan of EF/Entity Framework. To be honest, mostly I hate EF. I have many valid reasons to hate it however I really don’t want this post to be the show of hatred against EF.

Until today I was using my own data access framework. It was not the best of its class, but it did its job most of the time, and it was performant enough to serve as the infrastructure of many projects that are still actively used by thousands of users every day. However, it is old and getting older. New developments in technology, the rise of .NET Core, and Microsoft's shift away from the standard .NET Framework steered me toward updating my good old data access framework to embrace the new infrastructure. This library was so shaped and forged by real-world scenarios (most of them with tears and pain, of course) that upgrading my old friend to rise and shine in this brave new .NET Core world would be very expensive. Consequently (in order not to reinvent the wheel), I started my journey of searching for a new library capable of accessing data in an ACID manner, mapping entities, providing a cache mechanism, and so on. In short, I needed something simple, fast, easy, flexible, and capable…

First Migraine with EF Core

I've read a couple of articles stating that EF Core is much faster than its older sibling, EF 6, so I decided to test it. I followed the Code-First strategy: created my classes, decorated them with some type mappings (and also used some configurations) — all fine. However, it took a couple of hours to figure out where to modify the connection string, another hour to understand one-to-many and other relationships, and another couple of days to understand Migrations… I also lost hours on the "it is not an error, indeed it is a feature" things such as the standard conventions. And Microsoft's fast-paced (call it agile) release strategy, delivering new versions every other day, makes it really hard to find the correct solution to your actual problem, because a random internet post (or even the official documentation) that seems to be the solution is probably outdated or no longer supported. (Can you imagine the frustration of a developer who thinks they have finally found the solution to their actual problem, only to discover it is outdated? Thug life…)

Indeed, I was occupied with that estranged feeling of not knowing what exactly goes on under the hood, or of "Hey! An error occurred with the following description: An error occurred". Every error message my application generated was somehow related to EF: a false or outdated configuration of EF, or just an untold story of EF. The learning curve was so steep that I finally quit.

(And just before I start telling my story with Dapper, I would like to hit EF once more: the migration (in)capability of the Code-First approach is the most terrible thing I have encountered so far. Can you imagine how absurd it is to get a new class for every tiny change in the model? You will probably end up with 100000000 .cs files under your Migrations folder unless you do a merge!)

Dance with Dapper

Like every frustrated developer, I started another quest to find a flexible solution, and like every such developer I became acquainted with Dapper. Dapper was being marketed (not in monetary terms) as the greatest lightweight ORM to everyone suffering under EF or any other hefty, non-performant ORM tool. And I must admit: it is freakishly easy and lightweight. But…

Maybe it is too lightweight — as lightweight as hard-coding your own queries. And the idea of manually writing SQL queries in 2020 is pretty terrifying… Terrible memories of the past arose in my mind, of hunting down those manual SQL statement changes within the classes…

To overcome this "un-civilized" thing, there are add-on-like extension libraries. These libraries are pretty neat and enable developers to access and play with their data easily. Hassle-free.

I mean promising… Dapper is really promising, and it keeps its word of being the easiest ORM on the market. I've used Dapper in some projects, and it is still working flawlessly. But I slowly started asking more advanced questions (like multiple queries), and Dapper fell short.

Found a lovely thing! RepoDB

So far so good. Easy to use. Flexible enough. Fairly extensible. It is more like Dapper than EF, but without the coverage gaps and the hard-coded-SQL greediness. Very well documented. To be honest, it was pretty fast too. I gave it a try, and I suggest you give it a try as well.

I will keep you posted…

Business Intelligence

How to Install both 32-bit and 64-bit Microsoft.ACE.OLEDB.12.0 Providers…

There is probably no one who uses Excel as a data source and hasn't been haunted by that famous error:

The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine.

or, better to say, this, when you try to install that provider:

You cannot install the 32-bit version of Microsoft Access Database Engine 2016 because you currently have 64-bit Office products installed. If you want to install 32-bit Microsoft Access Database Engine 2016, you will first need to remove the 64-bit installation of Office products. After uninstalling the following product(s), rerun setup in order to install 32-bit version of Microsoft Access Database Engine 2016:
Microsoft Access database engine 2010 (English), Office 16 Click-to-Run Extensibility Component

and this is how that haunted message looks:

Welcome to the club 🙂

The message suggests that you either have a fight with your system admin, or spend plenty of time uninstalling and reinstalling the complete Office suite in order to get the desired version of the provider.

If you are also developing SSIS packages, you are probably aware that Visual Studio is a 32-bit application, so you will lose design-time support if you decide to continue with the 64-bit version of Office.

I mean, until today….

Today you will learn how to install both versions in the same machine! And indeed a very easy trick.

(drum roll please!!)

  1. Open Powershell/Command Prompt
  2. Locate the Access Database Engine.exe that you want to install
  3. add “/quiet” switch at the end and execute
  4. Tadaaaa! That’s it.
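Concretely, the command looks like the sketch below. The executable name is an assumption based on the 64-bit Access Database Engine redistributable; adjust the path to wherever you saved your download:

```shell
# Run the installer silently; the /quiet switch skips the bitness check in the bootstrapper.
.\AccessDatabaseEngine_X64.exe /quiet
```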

The reason is really funny. The error message you received is raised by the setup bootstrapper, which checks whether another version of the same product is installed. The "/quiet" switch simply prevents it from barking back at you.

Now I have design-time support for Excel files in SSIS and can also enjoy the benefits of the 64-bit version. So… But…

– Is it supported (to have both versions on same machine)?
– I dunnow!
– Does it work anyway?
– Awww f. yes!

Note: the /quiet switch only works for Office 2016 and later versions. For earlier versions you might have a look at the /passive switch and a registry edit.


über etwas/das Gras wachsen lassen

Another nice German "Redewendung" (idiom) used in everyday life:

"über etwas/das Gras wachsen lassen"

Loosely translated: "to let grass grow over something / to wait for grass to grow over it."

The meaning is clear and fairly self-explanatory: to let an unwanted or unpleasant situation fade away, to let it cool down, to wait for it to pass or be forgotten.

Some example sentences (again loosely translated):

Die ganze Sache ist noch viel zu frisch. Bevor du sie um Verzeihung bittest, solltest du erst etwas Gras über die Sache wachsen lassen.

The whole thing is still far too fresh. Before you ask her for forgiveness, you should first let some grass grow over the matter.

Im Moment ist mir dieser Vorfall total peinlich. Aber wenn mal ein bisschen Gras drüber gewachsen ist, werde ich bestimmt über diese Sache lachen können.

Right now this incident is totally embarrassing for me. But once a little grass has grown over it, I am sure I will be able to laugh about it.

Manchmal ist es wirklich sinnvoll, einfach zu warten, bis Gras über die Sache gewachsen ist.

Sometimes it really makes sense to simply wait until grass has grown over the matter.

Other Tips

Tip: Increase the Font Size of SQL Management Studio

An absolutely simple tip, but a real one. With the spread of high-resolution displays (and with age, of course) everything is getting smaller, and sometimes it is healthier and easier on our old eyes to read results in bigger fonts. It is especially needed when you are doing a presentation or sharing your screen with someone — otherwise it looks like an ant party happening in that grid 🙂

Recently I experienced exactly that and wanted to increase the font size in SSMS. I am used to zooming in and out with Ctrl+mouse scroll, but realised that this technique only applies to the query part of the screen in SSMS.

You can also set the zoom level from the combo on the bottom-left side of the query screen.

But it is not possible to increase the size of the results grid from anywhere on this screen, and Ctrl+mouse scroll doesn't work there either. The remedy is hidden under Tools => Options 🙂

Just follow the menu Tools => Options, and in the options dialog expand the Environment => Fonts and Colors group. In the "Show settings for:" drop-down, select "Grid Results" and adjust the font and size as desired. (But please, no Comic Sans :))

Important: you have to restart SSMS before this change takes effect.

One more last-minute tip: you can also change the font of the whole user interface. Just follow this:

For more tips like above, you can visit

Power BI Stack

How to add multiple files at once to model…

When there are multiple files to import, there is an easy way to bring them all into the Power BI Desktop application at once: the Folder import method of the Get Data functionality.

To do this, just follow "File > Get Data > More" and select Folder on the right side, or type "Folder" into the search box in the top left corner.

Select Folder as the data source to import all files in that folder.

You will be able to select, load, and combine data from different files and create reports from them. This is especially useful when an automation/application regularly writes text/CSV output to a folder and you have to combine all the data to conduct an analysis. This functionality saves you from a painful ETL process whose only purpose is to load the data into a database and consume it again to create a report. The Power BI Desktop Folder import functionality is a life saver.

Scenario 1: Multiple files categorized/aggregated per entity, such as Country.

In this scenario, assume that our automation exports sales data to a file server folder, grouped by country, so that we have a separate file for each country.

Now we will select this folder and let the Power BI Desktop Folder import functionality do its best. Just follow "File > Get Data > More" and find "Folder" in the right-hand section. Click "Connect".

Power BI Desktop will open Folder Browser Dialog. Navigate to the folder where your csv files were exported and click OK.

Power BI Desktop will display an intermediate window that summarizes the content of the selected folder. You can also see the file properties of each entry.

At the bottom of this screen there are three main buttons covering the following functionalities:

Load: Loads the file structure (without combining all files into one) into Power BI Desktop, but does not import the actual data.
Transform Data: Loads the file structure into Power BI Desktop and opens the Power Query Editor to edit/transform it.
Combine & Load: Combines all the data and loads it into the model.
Combine & Transform Data: Combines all the data and opens it for editing before loading it into the model.

Let us continue with Combine & Transform Data. This will open a new window where we can configure the combination options for the files we've selected.

Let us take the first file as the structure sample (this is not a problem when all our files share the same structure) and continue. Power BI Desktop will open the Power Query Editor (because we selected Transform) for our customizations. Once we are done, we can simply click Close & Apply (or just Apply) in the Power Query Editor to proceed with the data load.

Errors may be raised during this loading process. If any errors occur, they will be reported in the same dialog.

When you click "View Errors", you will be redirected to a list of errors where you can check the details of each one.

If everything goes well, you will be able to see this data in Power BI Model.

CSV files used in this post:


schlag mich tot / schieß mich tot

One of the German expressions you may hear in everyday life:

schlag mich tot or schieß mich tot

Literally it could be translated as "beat me dead" or "shoot me dead", but its meaning and usage are quite different. In everyday speech it works as a filler, much like "something-or-other", "whatsitsname", "who knows what", or "you name it" — used for something you can't recall or don't care to specify.

(Es ist meist ein Ausruf, wenn einem etwas nicht einfällt, oder gerade entfallen ist. — It is mostly an exclamation used when something doesn't come to mind or has just slipped one's mind.)

Sie wohnt in der Bahnhofstraße, Hausnummer schlag-mich-tot
(She lives on Bahnhofstraße, at house number whatever-it-is.)

Gleich läuft das Spiel: Deutschland gegen Schlagmichtot.
(The match is about to start: Germany versus who-knows-who.)

Du kannst dieses Paket mit DHL, DeutschePost, Hermes und schlag mich tot liefern
(You can send this package with DHL, DeutschePost, Hermes, or whatever else you like.)

More info:

(I also highly recommend this site; it is the king of Redewendungen.)

Power BI Stack

How to use MS Access Database File in Power…

Power BI Online is not capable of importing a Microsoft Access database file (either .mdb or .accdb). To use an MS Access DB as a data source for analysis, we have to use Power BI Desktop first.

When you want to import a file in the Power BI Online service, you see only the supported data types; MS Access Database is not listed, and is therefore not supported as a local file to be imported and used as a data source.

Open the Power BI Desktop application, follow the "Get Data > More > Database > Access Database" menu path, create your reports, and save the result as a PBIX file. Then you can publish this file to Power BI Online, or connect to the PBIX file via "Get Data > Files" in the Power BI Online service and create your reports on top of it.

To do this, you can follow these instructions below:

  1. Open Power BI Desktop Application
  2. Follow "Menu > File > Get Data > More" and a new window will open, full of supported data sources.

  3. Find "Access Database" in the list on the right, or by using the search box in the top left.

  4. Choose your desired MS Access database file.

  5. Now you will see the entire list of tables in the selected database file. Select and import the tables as desired.

    (If you get an error regarding Access Database Engine driver incompatibility in this step, see Note 1 at the end of this post)

  6. After you are done with the data import, design your report (or just create an empty one) and hit Publish, or save it as a PBIX file on your local drive (to upload it manually from the portal later).

  7. You will be able to see your newly created dataset in “Datasets” section of your Power BI Online instance.

  8. That’s it. Now you will be able to create your reports over it.

#Note 1:

When you try to connect to an Access database and get the following error:
Details: "Microsoft Access: The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. The 64-bit version of the Access Database Engine 2010 Access Database Engine OLEDB provider may be required to read 'Northwind.MDB'. To download the client software, visit the following site:"

It means that the driver of the data source you are trying to connect to and the application you are using have different system architectures. For instance, you have installed the 64-bit Power BI Desktop but are trying to connect to the Access database with the 32-bit Access driver, or vice versa. To keep it simple: either use Power BI Desktop 64-bit with the 64-bit Access driver, or Power BI Desktop 32-bit with the 32-bit Access driver. (Note that the Access Database Driver and the MS Access application are totally different things.)

Another important point is:

If you have already installed Microsoft Office as 32-bit, do not install the 64-bit Access driver. You cannot normally use two architectures of the same product on the same box/computer, and you would probably get an annoying error like this:

ACE engine bitness error

The best practice is to install Power BI Desktop in line with the architecture of your Microsoft Office installation.

However, if you use the 32-bit Power BI Desktop application, you may experience out-of-memory problems with large datasets.

If your Power BI Desktop and Microsoft Office installations are already 64-bit and you still get the connection error, it is highly probable that your computer has the 32-bit Access Database Driver installed. Just download and install the 64-bit Microsoft Access Database Engine 2010 Redistributable (AccessDatabaseEngine_X64.exe) from the following link:

For more information, follow the link in the original message for troubleshooting connectivity issues.

Power BI Stack

Power BI Desktop – Change Views: Report, Data, Model

There is a simple and effective way to toggle between the Report, Data, and Model views in the Power BI Desktop application: use the toolbar on the left-hand side to switch quickly.

Power BI Desktop Report View
Power BI Desktop Report and Other Views

The default view is the Report View; Power BI Desktop starts with it. It is used for designing and viewing a report.

The second view is the Data View. In this view you can browse the underlying data in your dataset. Select this view and pick fields on the right side to view sample data.

Power BI Desktop Data View

The third view is the Model View. In this view you can perform modeling tasks: design the data model of your report, see and define the relationships between entities, and change other metadata-related information.

Power BI Desktop Model View