Although many projects can be generated, you only need the API project to start working directly against a database. You can generate as many or as few projects as you like; the others add functionality and completeness. A minimal full generation is the API and the database project, which together let you interact with, version, and update your database.
The main purpose of an ORM tool is to map a CRUD layer from a database to an easy-to-use API. You can start by building a model from scratch or by reverse-engineering an existing database. Once you have a model, it is your master copy. Do not make database changes haphazardly: make all changes to the model and regenerate. All of your database schema changes should be reflected in the generated and custom database scripts. This ensures continuity and allows you to upgrade systems running different versions of your database and API.
Your first model will reflect your database schema. Once you have the model conceptualized, you can start making the more advanced modifications described below.
When a model is generated, several projects can be created in VS.NET. The first is the core DAL: an Entity Framework-based assembly that contains all the objects for your database. A DTO layer can also be created; it mirrors the DAL with lightweight objects that have integrated serialization functionality for querying over the wire. Another useful project is the installation library, which you can run with the .NET installation utility or include in a larger application to provide installation capability to your users.
Data Access Layer
The Entity Framework DAL is the core of the system. It contains the concrete classes of the model. These classes can be loaded from and saved to the database, and they support inheritance hierarchies. They can be queried with LINQ syntax, and everything about them is strongly typed. This library, along with its interfaces assembly, is the only generated code needed on the client for database access. It is the gatekeeper for all data access, implementing the entities and the logic for loading, saving, inheritance, and so on. It is extensible in that all classes are generated as partial classes split into two distinct files: a user file that is generated once and never overwritten, and a machine file that is managed by the generator and will be overwritten. Unlike many ORM applications, there is no need to keep track of regions that the generator will overwrite or protect; that approach is messy and error-prone. Your custom code will never be overwritten. It is compiled directly into the related entity, and a user of the compiled library has no way to tell the difference between custom and generated code.
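As a sketch of this two-file split, assume a generated entity named Customer; the names here are illustrative, not the tool's actual output:

```csharp
// Machine file: regenerated by the tool on every run (illustrative stand-in).
public partial class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// User file: generated once and never overwritten. Members here compile
// directly into the same Customer type, so callers of the compiled
// assembly cannot distinguish custom code from generated code.
public partial class Customer
{
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}
```

Both halves compile into a single Customer class; regeneration replaces only the machine file.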
Data Transfer Layer
The data transfer layer is a set of lightweight objects that allow data structures to be serialized, keeping the data load across the wire very small. The generated objects mirror the DAL in that they are objects based on the model.
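For illustration, a DTO mirroring a hypothetical Customer entity might look like the following; the attribute-based serialization shown is one standard .NET approach, and all names are assumptions rather than the tool's actual output:

```csharp
using System.Runtime.Serialization;

// Lightweight mirror of the DAL's Customer entity: plain data, no
// Entity Framework dependencies, ready to serialize over the wire.
[DataContract]
public class CustomerDto
{
    [DataMember] public int CustomerId { get; set; }
    [DataMember] public string FirstName { get; set; }
    [DataMember] public string LastName { get; set; }
}
```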
Installation Library
The installation project contains all the code needed to create a new database or update an existing one. When a new database is created, all tables, stored procedures, indexes, relationships, views, functions, and static data are created, and the database is ready for use by the DAL. Each database is versioned upon creation or modification. The installation application determines the current version of the database and upgrades it to the newest version. This gives you a full record of all database changes, so production can be updated to a new version atomically.
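Conceptually, the upgrade process is a loop that walks the database from its current version to the newest one, one step at a time. The following is a sketch of that idea, not the tool's actual implementation:

```csharp
using System;

public static class UpgradeSketch
{
    // Walks the database from its current version to the target version,
    // applying each step's update scripts in order (X -> X+1 -> ...).
    // runStepScripts is a stand-in for whatever executes the SQL for one step.
    public static int Upgrade(int currentVersion, int latestVersion,
                              Action<int, int> runStepScripts)
    {
        while (currentVersion < latestVersion)
        {
            runStepScripts(currentVersion, currentVersion + 1);
            currentVersion++;
        }
        return currentVersion;
    }
}
```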
This project is compiled into a library, not an application. The library can be run directly from the .NET environment with the .NET tool InstallUtil.exe, or it can be incorporated into a larger application. Running it with InstallUtil means you can execute your database changes while developing without ever leaving the .NET environment. Incorporating it into a larger application usually means a custom installation application, but it can be any application.
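For example, from a Visual Studio command prompt you might run the compiled library through InstallUtil; the assembly name below is illustrative:

```shell
REM Apply the database create/upgrade logic packaged in the library.
installutil.exe MyCompany.Database.Install.dll
```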
When you make model changes and regenerate your code, update scripts are created that transform the previous database format into the new one. You can also add your own scripts to the provided versioned SQL files, which run only when updating from version X to version X+1. Custom scripts can be scheduled to run before or after an update, and custom static data scripts can supplement the static data defined in the model. Any arbitrary script added to the library will be executed, which gives your deployment process a great deal of flexibility.
It is important to note that all generated stored procedures are compiled into this library as well. You can add custom stored procedures, views, and other objects if you wish. You can truly treat this library as your database upgrade path.