Administration Using SQL Server 2008 Tools
Let's just jump in and get our feet wet, shall we? In Chapter 2 you used SSMS to view the objects found in an Analysis Services 2008 database. We'll start here on a similar footing:
1. Launch SSMS from All Programs → Microsoft SQL Server 2008 → SQL Server Management Studio.
2. Using Object Explorer, connect to the Analysis Services instance.
3. Open the Databases folder.
You will see a tree view of the databases you have saved on the server to date, as shown in Figure 7-1. One of those databases should be titled AnalysisServices2008Tutorial. Take a moment to review the tree nodes and what they contain, because you will be learning the administrative tasks associated with those objects.
Managing Analysis Servers
SSMS, the integrated management environment for the SQL Server 2008 products, provides you the flexibility of managing several Analysis Services instances. In this chapter we use the word "server" to denote an instance of Analysis Services and "servers" to denote multiple instances. If you have a set of production servers used by customers and a set of test servers used for development and testing, you typically want to manage them differently. The most logical approach is to group these servers. Using the Registered Servers window of SQL Server Management Studio, you can group a set of Analysis Services servers to form a server group, as shown in Figure 7-2. You can register Analysis Services servers and organize them into groups using the New Server Group and New Server Registration dialogs, which you can launch by right-clicking the Local Server Groups folder under the Analysis Services folder in the Registered Servers window of SSMS.
Some of the common tasks of starting, stopping, restarting, and configuring Analysis Services servers can also be accomplished from the Registered Servers window. You can right-click the specific Analysis Services instance and choose the appropriate operation. In addition, you can switch to the Object Explorer window of the connected SSAS instance, or launch the MDX query editor or the SQL Server Configuration Manager dialog from this window.
Once you are connected to an Analysis Services server in the Object Explorer window, you can accomplish various administrative tasks on that server, such as creating new databases, granting permissions, processing objects, and moving databases from test servers to production servers. First and foremost, the Analysis Services admin must grant access permissions to the users who will administer the server. The following steps show how to add a user as an administrator of an Analysis Services server by making the user part of the object called the Server role:
1. In the Object Explorer window, right-click the Analysis Services instance and select Properties. You will now see the Analysis Services Properties dialog.
2. Click Security in the page list, as shown in Figure 7-3.
3. Click the Add button to add a user to the Analysis Services administrators group. You can add domain users, local machine users, or groups as part of the administrator group for Analysis Services. If your user is a local user, you can specify <machinename>\username or just the username to add the user to this server administrator group.
Another important management task is to set appropriate Analysis Server properties so that Analysis Services performs optimally. You can do this using the Analysis Server Properties dialog shown in Figure 7-4. Analysis Services needs to be restarted for certain properties to take effect; this is indicated by a "yes" in the Restart column for those properties in the dialog. Some of the most important properties involve control of parallelism for processing and querying and changing the read buffer size for faster query response time. Equally important are the maximum amount of memory used by the Analysis Services processes, the maximum number of connections to the server, and the ability to turn certain features on or off. You learn some of these properties in this chapter and others in Chapter 14. The properties dialog has a checkbox that enables you to view and modify the advanced properties of the Analysis Services server. Adding users to the Server role or a Database role and setting properties are considered part of securing your Analysis Services server. You learn more about managing security at the end of this chapter.
Managing Analysis Services Objects
Several management tasks can be performed on Analysis Services objects. Some of the most important are processing cubes and dimensions, granting access permissions to various objects within a database, managing the partitions of a cube based on usage, and adding assemblies to databases. Even though SQL Server Management Studio provides a great interface for managing Analysis Services 2008 and abstracts away the internal details, it is beneficial to understand the underlying operations that take place when you perform management operations. Knowledge of these server internals helps you better understand the server's behavior and manage it more effectively when unforeseen problems occur.
All communication with Analysis Services is through XML for Analysis (XMLA). The management tasks executed through SSMS use the management object model AMO (Analysis Management Objects), which in turn sends XMLA Execute commands to the Analysis Services instance. You will see some of the commands sent to the Analysis Services server as you perform management tasks in this chapter.
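To make the wire format concrete, here is a small sketch of how a management tool embeds a command in an XMLA Execute request inside a SOAP envelope. This is a hedged illustration built with plain Python strings, not AMO itself; AMO constructs and sends this envelope for you, and the `wrap_in_execute` helper is hypothetical.

```python
# Illustrative sketch: wrapping a command in the XMLA Execute request
# that management tools send to Analysis Services. AMO does this
# internally; this helper only mirrors the envelope structure.

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
XMLA_NS = "urn:schemas-microsoft-com:xml-analysis"

def wrap_in_execute(command_xml: str) -> str:
    """Embed a command (e.g., a Batch/Process script) in an XMLA
    Execute request inside a SOAP envelope."""
    return (
        f'<Envelope xmlns="{SOAP_NS}">'
        "<Body>"
        f'<Execute xmlns="{XMLA_NS}">'
        f"<Command>{command_xml}</Command>"
        "<Properties/>"
        "</Execute>"
        "</Body>"
        "</Envelope>"
    )

request = wrap_in_execute("<Statement>SELECT FROM [Sales]</Statement>")
print(request)
```

The cube name in the sample Statement is a placeholder; the point is only the Envelope/Body/Execute/Command nesting that every management request shares.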
SQL Server Analysis Services 2008 allows a server administrator to create databases and assign database administrative permissions to a user. The following steps show how to do this:
1. In the SSMS Object Explorer, right-click the Databases folder and select New Database as shown in Figure 7-5.
2. Enter the new database name DBATest as shown in Figure 7-6 and click OK. SSMS sends an XMLA command to SSAS to create the new database DBATest. SSMS then refreshes the Databases folder by retrieving the list of databases from SSAS. You should see the DBATest database as shown in Figure 7-7. If you are an administrator of SSAS, your account is a member of the Analysis Services server administrator role, as seen in Figure 7-3. If you want to give a user database administrator privileges rather than server-wide privileges, you need to grant the appropriate permissions at the database level. Follow the next steps to grant database administrator permissions to a user.
3. Expand the DBATest folder. You will see the various folders under DBATest.
4. Right-click the Roles folder and select "New Role" as shown in Figure 7-7.
5. In the Create Role dialog, check the "Full control (Administrator)" checkbox (shown in Figure 7-8) to provide full database administrator privileges.
6. Select the Membership page in the dialog as shown in Figure 7-9.
7. Click the Add button on the Membership page to bring up the Select Users or Groups dialog.
8. Enter the user or users to whom you want to grant database permissions as shown in Figure 7-10 and click OK. You should now see the users you specified listed in the Create Role dialog.
9. Click OK in the Create Role dialog.
You have successfully created a database called DBATest and granted full database permissions to a specific user. Users in this role will have full permissions to modify any of the objects that are part of the DBATest database, including deleting the database itself. They do not have permissions to perform any operations outside of the DBATest database unless they are also part of the Analysis Services server administrator role. To create data sources, Data Source Views, cubes, dimensions, and mining models, we recommend creating and modifying the objects in DBATest using the online mode of Business Intelligence Development Studio (BIDS). You learn to work with SSAS databases using the online mode of BIDS later in this chapter.
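Behind the OK button in the New Database dialog, SSMS sends an XMLA Create command. The sketch below approximates the general shape of that command for the DBATest database using Python's standard library; treat it as an illustration of the element nesting, not a captured trace of what SSMS sends.

```python
# Sketch of an XMLA Create command for a new database. The element
# names follow the Analysis Services engine namespace; the exact
# command SSMS generates carries more properties than shown here.
import xml.etree.ElementTree as ET

ENGINE_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def build_create_database(db_id: str, db_name: str) -> str:
    create = ET.Element("Create", xmlns=ENGINE_NS)
    obj_def = ET.SubElement(create, "ObjectDefinition")
    database = ET.SubElement(obj_def, "Database")
    ET.SubElement(database, "ID").text = db_id
    ET.SubElement(database, "Name").text = db_name
    return ET.tostring(create, encoding="unicode")

print(build_create_database("DBATest", "DBATest"))
```

Running this prints a minimal Create command whose ObjectDefinition holds the new database's ID and Name, the two pieces of information you supplied in the dialog.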
Processing Analysis Services Database Objects
One of the important jobs of an Analysis Services DBA (database administrator) is to process the objects (such as cubes, dimensions, and mining models) in an Analysis Services database. Analysis Services 2008 gives the DBA fine-grained control over processing the objects within a database through the Process dialog. You can launch the Process dialog by right-clicking object folders such as Cubes, Dimensions, and Mining Structures, or by selecting an individual object or a group of objects and launching it from there. Based on the location from which the Process dialog is launched, the options for processing the object or group of objects will vary. To process the database AnalysisServices2008Tutorial, do the following:
1. Right-click the database AnalysisServices2008Tutorial and select Process as shown in Figure 7-11.
You will see the Process dialog as shown in Figure 7-12. This dialog shows the name of the object to be processed along with the type of the object. Several processing options are available for each object. The default option for the database object is Process Full. As the name implies, the Process Full option processes the selected object completely even if the object has been processed earlier, clearing any data that was processed before.
2. When you click the OK button, the Process dialog sends an XMLA command to the Analysis Services instance to process the selected object. If you click the Script button shown in Figure 7-12 and then select Script Action to New Query Window, you can see the Process XMLA command that will be sent to the Analysis Services instance. You will see the following script:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <Object>
        <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
3. Click OK in this dialog to process the AnalysisServices2008Tutorial database. When you click OK the Process dialog uses AMO to send the Process command to the Analysis Services instance.
The Process XMLA script contains several commands that are interpreted by Analysis Services. Because the medium of communication with Analysis Services is an XMLA request, the script is embedded within SOAP Envelope tags when sent. This script can be executed from the XMLA editor within SQL Server Management Studio; SSMS adds the appropriate SOAP envelope tags to send the script to Analysis Services. The commands in the script are Batch, Parallel, and Process. The Process command is part of a set of commands that manipulate the data in Analysis Services; these commands that change the data in Analysis Services databases are called the DML (data manipulation language). The Batch command allows multiple commands to be executed within a single statement. The Parallel command instructs the Analysis Services instance to execute all the commands it contains in parallel. The Process command is used to process an Analysis Services object and takes several properties, such as DatabaseID and process Type, as well as processing options (not shown in the preceding XMLA script) such as the degree of parallelism for processing objects and the actions to be taken on dimension key errors, which can be changed using the Change Settings button in the Process dialog. You learn the processing options provided by the Process dialog later in this chapter.
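The Batch → Parallel → Process nesting described above can be assembled programmatically. The following Python sketch (an illustration using the standard library, not a tool that ships with Analysis Services) builds a Process script with the same structure as the one SSMS generated, parameterized by database ID and process type.

```python
# Illustrative builder for a Batch/Parallel/Process script like the
# one the Process dialog scripts out. The element names and the
# engine namespace match the script shown in the text.
import xml.etree.ElementTree as ET

ENGINE_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def build_process_batch(database_id: str,
                        process_type: str = "ProcessFull") -> str:
    batch = ET.Element("Batch", xmlns=ENGINE_NS)
    parallel = ET.SubElement(batch, "Parallel")
    process = ET.SubElement(parallel, "Process")
    obj = ET.SubElement(process, "Object")
    ET.SubElement(obj, "DatabaseID").text = database_id
    ET.SubElement(process, "Type").text = process_type
    ET.SubElement(process, "WriteBackTableCreation").text = "UseExisting"
    return ET.tostring(batch, encoding="unicode")

print(build_process_batch("AnalysisServices2008Tutorial"))
```

Changing the `process_type` argument is all it takes to script a different kind of processing against the same object.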
As mentioned earlier, when you click OK in the Process dialog, a Process command with appropriate options is sent to the Analysis Services instance. This command requests the server to process the database. When processing the objects within a database, the server needs to read data from the data source, which it does by issuing queries to it. You will now see the Process Progress dialog, which shows details of each processing operation on the server. As you can see from Figure 7-13, the operations on each object within the database being processed are reported along with timing information and whether the operation succeeded or failed. You can also see the query sent to the relational data source to retrieve the data. The detailed information returned from Analysis Services is very helpful if you need to investigate any processing issues, including the performance of processing an object.
Once all the objects have been processed, you will see the results of the processing command. If all the objects were successfully processed, you will see "Process succeeded" in the status, as shown in Figure 7-13. If there were errors during processing, the status bar will show an appropriate message, and the operations that resulted in an error are shown in red in the tree view of the Process Progress dialog. You can drill down into the details of the processing to understand the reasons for failure.
Several operations take place in the preceding processing command. All the objects within the database are processed in parallel based on the settings of the Analysis Services instance. If there are dependencies, the dependent objects are processed first. For example, the dimensions that are part of a cube need to be processed before the cube can be processed. Analysis Services processes all the objects of the database under a single transaction. What this means is that if one of the objects failed during processing, the remaining objects will not be processed and the effects of any previous operations will be rolled back. For example, if all the dimensions of a cube were successfully processed and if there were errors while processing the cube, the processing of the dimension objects will be rolled back. Once all the objects have been successfully processed, the server commits the transaction, which means that the objects are marked as processed and are available for querying.
Assume an Analysis Services object has been processed and is being queried by users. While users are querying the object, you can initiate processing on the same object. Because a version of the object is currently being queried, Analysis Services stores the uncommitted processed object in temporary files. At commit time, the server first ensures that no users are still using the old version of the objects, removes the previous version of the processed objects, and then marks the temporary files as primary. You see this in detail in the following section.
Processing a Cube
An Analysis Services database can contain several cubes and dimensions. You have the flexibility to control the processing of individual cubes and dimensions by launching the Process dialog from the appropriate cube or dimension object. There are several processing options for processing a cube, as shown in Figure 7-14. All of the processing options available for partitions and measure groups are also available for the cube, because a cube is a collection of measure groups, which in turn are collections of partitions.
When a cube is created you will typically do a full process of it (Process Full in the Process dialog) so that you can browse the cube. Usually the cube structure will not change after the initial design is completed; in that case, you will be processing in order to add new fact data to the cube. For example, you might have created a Sales cube and receive sales fact data from each store every month. Processing the entire cube whenever new data comes in takes a considerable amount of time, forcing end users to wait a long period to see the most up-to-date data. Analysis Services 2008 provides an option to process only the new fact data instead of the entire cube; this is called incremental processing. To add new fact data to the cube, you can add a new partition to the cube and process that partition. Alternatively, you can use the Process Incremental option in the Process dialog and specify the query that provides the new fact data to be processed. Process Incremental is a common management task for data warehouses. If you specify the Process Default option in the Process dialog, the server checks for all the objects that have not been processed and processes only those. If the cube data has been processed but aggregations and indexes have not, only the aggregations and indexes are processed.
When you choose the Process Full option for processing a cube, the server performs three internal operations. If the storage mode for the cube is MOLAP, the server first reads the data from the relational data source and stores it in a compact format. If aggregations are defined for the cube, the server builds those aggregations during processing. Finally, the server creates indexes for the data that help speed access during querying; even if no aggregations are specified for the cube, the server still creates the indexes. The Process Data option is effectively the first step of Process Full, in which the server reads data from the relational data source and stores it in its proprietary format. The second and third steps, building aggregations and indexes, can be accomplished separately with the Process Index option. You might be wondering why the Process Data and Process Index options exist when Process Full and Process Default accomplish the same overall task. These options give the administrator finer-grained control. They are especially important when you have limited time to access the relational data source and want to optimize processing on your machine, because multiple processing operations running in parallel can require more system resources. Specifically, on a 32-bit (x86) system, a large cube that fails on Process Full may be processed successfully by sending Process Data and Process Index commands one after another. In such instances, we recommend you first get the data from your relational backend into SSAS using the Process Data option. Once you have all the data in the Analysis Services instance, you can then create your aggregations and indexes, which do not need access to the relational data source.
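The two-step approach can be expressed as a single Batch containing two Process commands; commands placed in a Batch without a Parallel element execute one after the other. The sketch below is illustrative only, reusing Python's standard library to show the shape of such a sequential batch.

```python
# Sketch of a Batch that runs ProcessData and then ProcessIndexes
# sequentially. With no Parallel element, the two Process commands
# in the Batch execute in order.
import xml.etree.ElementTree as ET

ENGINE_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def build_two_step_process(database_id: str) -> str:
    batch = ET.Element("Batch", xmlns=ENGINE_NS)
    for process_type in ("ProcessData", "ProcessIndexes"):
        process = ET.SubElement(batch, "Process")
        obj = ET.SubElement(process, "Object")
        ET.SubElement(obj, "DatabaseID").text = database_id
        ET.SubElement(process, "Type").text = process_type
    return ET.tostring(batch, encoding="unicode")

print(build_two_step_process("AnalysisServices2008Tutorial"))
```

Note that the XMLA process type for the Process Index option is ProcessIndexes; the dialog option names and the wire-level type names differ slightly.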
If you choose the Process Structure option, the server processes all the cube's dimensions and the cube definitions, so the cube's structure is processed without any of its data. The server does not process the partitions or measure groups of the cube, so you cannot see any of the fact data; however, you can browse the cube because the cube definitions are processed. You can retrieve metadata information about the cube (measure names, measure groups, dimensions, KPIs, actions, and so on) after processing the cube's structure, but you will not be able to query the cube data. For a cube that has been processed with Process Structure, you can see the cube in the SQL Server Management Studio MDX query editor when you open the cube drop-down list. If your cube contains linked measure groups and they have been processed successfully, processing the cube with the Process Structure option will allow you to query the measures in those linked measure groups. Often when you design your UDM you will want to make sure your design is correct and your customers are able to see the right measures and dimensions; Process Structure is helpful in validating your design. As soon as the data for the cube is available, the cube can be processed with the Process Default option so that end users can query the data.
You can clear the data in the cube using the Unprocess option. The processing options provided in the Process dialog are different from the process types specified in the Process command sent to Analysis Services. The following table shows how the various processing options map to the process types sent to Analysis Services:
Process Option in Process Dialog      Process Type in Process Command
Process Default                       ProcessDefault
Process Full                          ProcessFull
Process Data                          ProcessData
Process Structure                     ProcessStructure
Unprocess                             ProcessClear
Process Index                         ProcessIndexes
Process Incremental                   ProcessAdd
Process Update                        ProcessUpdate
Process Script Cache                  ProcessScriptCache
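This mapping can be captured as a simple lookup, which is handy when you generate process scripts by hand rather than through the Process dialog. The dictionary below merely restates the table in code form; it is not part of any Analysis Services API.

```python
# Mapping from Process dialog options to XMLA process types,
# restating the table above as a plain lookup.
PROCESS_TYPE_FOR_OPTION = {
    "Process Default": "ProcessDefault",
    "Process Full": "ProcessFull",
    "Process Data": "ProcessData",
    "Process Structure": "ProcessStructure",
    "Unprocess": "ProcessClear",
    "Process Index": "ProcessIndexes",
    "Process Incremental": "ProcessAdd",
    "Process Update": "ProcessUpdate",
    "Process Script Cache": "ProcessScriptCache",
}

print(PROCESS_TYPE_FOR_OPTION["Unprocess"])
```

The least obvious rows are Unprocess, which scripts as ProcessClear, and Process Incremental, which scripts as ProcessAdd.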
The processed data of a cube is stored in a hierarchical directory structure equivalent to the structure you see in the Object Explorer. Figure 7-15 shows the directory structure of the processed data of the AnalysisServices2008Tutorial database in Analysis Services 2008, including the files within a partition. The metadata information about the cubes and dimensions is stored as XML files, and the data is stored in a proprietary format. Every time an object is processed, a new version number is appended to the object. For example, the files shown in Figure 7-15 are under a specific partition directory. The file info.<versionnumber>.xml stores the metadata information about the partition; similar metadata files are stored within the directories of each object, cube, dimension, and measure group. We recommend you browse through each object folder to see the metadata information. The fact data is stored in the file with the extension .data.
The key to an OLAP database is fast access to data. You learned earlier that a cell is represented by a tuple, which is the intersection of various dimension members. For fast data access, Analysis Services builds indexes to access data across multiple dimensions. The index files in Analysis Services have the extension .map. In Figure 7-15 you can see the .map files, which have the format <version>.<Dimension>.<Hierarchy>.fact.map. There is an associated header file for each map file. Analysis Services stores the data in blocks called segments for fast access; the associated header file contains offsets to the various segments for fast access during queries.
The processing dialog provides you the flexibility of processing objects in parallel or within the same transaction. If errors are encountered during processing, you can set options to handle them. You can configure the parallelism and error options by clicking the Change Settings button in the Process dialog. You will see the Change Settings dialog as shown in Figure 7-16, which enables you to configure certain processing options and error settings. Setting the parallelism option is as simple as selecting the appropriate option in the Processing Order section of the dialog. By default all the objects are processed in parallel and within the same transaction. If you do not want the failure of one object to impact other objects, you should process the objects under different transactions by choosing the Sequential option.
You might encounter errors while processing your Analysis Services objects due to incorrect design or referential integrity problems in the relational data source. For example, if you have a fact record that contains a dimension ID that is not available in the dimension table, you will see a "Key not found" error while processing the cube. By default, when an error is encountered during processing, the processing operation fails. You can change the settings in the processing dialog to take an action other than failing the operation. The Dimension Key Errors page of the Change Settings dialog, shown in Figure 7-17, allows you to change the error configuration settings for all the objects selected for processing. Whenever you encounter key errors, you can either convert the offending values to an unknown member or discard the erroneous records. You can run into key errors while processing facts or dimensions. If you encounter a key error while processing a cube, it means Analysis Services was unable to find a corresponding key in the dimension; you can assign the fact value to a member called the Unknown Member for that specific dimension. You can encounter key errors while processing a snowflake dimension when an attribute defined as a foreign key does not exist in the foreign table or when there are duplicate entries. The two most common types of key errors you might encounter during dimension processing are key-not-found and duplicate-key errors.
Processing a Dimension
You can process dimensions independently of the cubes they are part of. After the initial processing of a dimension, you might process it on a periodic basis if records are added to the dimension table or columns of an existing row change. An example of an addition is new products being added to the Products dimension; you would want this information reflected in the dimension so that you can see the sales information for the new products. An example of a change is an employee moving from one city to another, in which case the attributes of the employee need to change. The Process dialog therefore provides various options for processing a dimension, as shown in Figure 7-18.
While processing a dimension, Analysis Services reads data from the dimension tables. When a dimension is processed, each attribute of the dimension is processed separately. Based on the parallelism specified on the Analysis Services instance, these attributes can be processed in parallel. Each dimension contains an attribute called the All attribute; this is not exposed to the user but is used internally by Analysis Services. You can see the files associated with this attribute as <version>.(All).<extension> in Figure 7-19. When each attribute is processed, several files are created. Similar to fact data, dimension data is stored in a proprietary format. Each attribute of a dimension has a key column and a named column, which map directly to two files with the extensions kstore and sstore, referring to the key store and string store, respectively. Additional files are also created for each attribute of the dimension, which help in fast access to the name, key, and levels of attributes and hierarchies. Files with the extension .map are created when indexes are processed for each attribute; they help in fast retrieval of the related attributes of a dimension member.
The amount of time it takes to process a dimension depends on the number of attributes and hierarchies in the dimension as well as the number of members in each hierarchy. When a processing command is sent to the Analysis Services instance, the server reads the data from the relational data source and updates the dimension. When a dimension is processed, each attribute of the dimension is processed separately. Some attributes can be processed in parallel, whereas some cannot. The order of processing of the various attributes depends on the relationships between the attributes in the dimension and the resources available on the machine. The relationships between attributes are defined at dimension design time using the Attribute Relationships tab of the dimension designer, which you learned about in Chapter 5. For example, say you have a Customer dimension that contains the attributes Customer Name, SSN, City, State, and Country. Assume SSN is the key attribute for this dimension and, by default, all attributes within the dimension are related to the key attribute. In addition, assume the following attribute relationships have been established: Country → State, State → City, City → Customer Name, State → Customer Name, and Country → Customer Name. Based on these relationships, the order of processing of the attributes in the Customer dimension is Country, State, City, Customer Name, and then SSN. This is because Analysis Services needs to have information about Country in order to establish the member property relationship while processing State, Customer Name, or SSN. The key attribute is always the last attribute processed within a dimension.
When the Process Default option is chosen, the dimension's data or indexes are processed only if they have not been processed or are out-of-date. If the Process Full option is chosen, the entire dimension is reprocessed: dimension data and indexes that were processed previously are dropped, and data is retrieved afresh from the data source. The dimension processing time depends on the dimension size (the number of dimension members as well as the number of attributes and hierarchies in the dimension) and your machine resources.
Similar to incremental processing of cubes, you can incrementally process dimensions using the Process Update option. The Process Update option in the Process dialog maps to the ProcessUpdate process type in the Process command, which applies only to dimensions. Some dimensions, such as Employees, Customers, or Products, can contain a large number of members. Additional members may have been added to these dimensions, or some attributes of existing members might have changed. Often a full processing of such a dimension is not only unnecessary but cannot be afforded due to business needs; under these circumstances, incremental processing of the dimension or an update of its attributes should be sufficient. When you choose the Process Update option, the server scans all the rows of the dimension table. If there were changes to a dimension member's properties, such as caption or description, they are updated. If new members were added to the dimension table, they are added to the existing dimension using incremental processing. The attributes of each dimension member are also updated; the key of each dimension member is assumed to be the same, but other attributes may be updated. The most important attribute updated is the member property for each member. When you have a parent-child hierarchy in a dimension and the parent attribute has changed, that information is updated during Process Update.
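As a sketch of what Process Update looks like on the wire, the following builds a Process command with type ProcessUpdate targeting a dimension. The dimension ID used in the example is hypothetical, and the helper itself is illustrative rather than part of any shipped tool.

```python
# Illustrative Process command with type ProcessUpdate for a
# dimension. The Object element identifies both the database and
# the dimension; "Dim Customer" is a made-up dimension ID.
import xml.etree.ElementTree as ET

ENGINE_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def build_dimension_update(database_id: str, dimension_id: str) -> str:
    process = ET.Element("Process", xmlns=ENGINE_NS)
    obj = ET.SubElement(process, "Object")
    ET.SubElement(obj, "DatabaseID").text = database_id
    ET.SubElement(obj, "DimensionID").text = dimension_id
    ET.SubElement(process, "Type").text = "ProcessUpdate"
    return ET.tostring(process, encoding="unicode")

print(build_dimension_update("AnalysisServices2008Tutorial", "Dim Customer"))
```

Compared with the database-level script shown earlier, the only structural difference is the extra DimensionID element inside Object, which narrows the command to a single dimension.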
The Process Data option for dimensions processes the dimension data without processing the indexes. The Process Index option creates indexes for the attributes of the dimension. If the ProcessingMode dimension property is set to LazyAggregations, Analysis Services builds indexes for new attributes of the dimension as a lazy operation on a background thread; if you want to rebuild these indexes immediately, you can do so by choosing the Process Index option. The Unprocess option clears the data within the dimension.
Managing Partitions
Partitions enable you to distribute fact data and aggregated data within Analysis Services so that the resources on a machine can be utilized efficiently. When there are multiple partitions on the same server, you reap the benefits of partitions because Analysis Services reads and writes data in parallel across them. Fact data in the data source can be stored as several fact tables (Sales_Fact_2002, Sales_Fact_2003, and so on) or as a single large fact table, such as Sales Fact. You can create multiple partitions within a measure group: one for each fact table in the data source, or by splitting the data from a single large fact table through several queries. Partitions also allow you to split the data across two or more machines running Analysis Services; these are called remote partitions. As an administrator, you might be wondering what size each partition should be to achieve the best results. Microsoft recommends that each partition hold roughly 3-5GB or 20 million records. You learn more about optimizing partitions in Chapter 14.
A sales cube's partitions usually contain data spread across time; that is, a new partition might be created for every month or quarter. As an administrator, you would create a new partition from SQL Server Management Studio and process it so that it is available to users. To create a new partition, perform the following steps in BIDS:
1. Open the AnalysisServices2008Tutorial project you have used in previous chapters.
2. Change the FactInternetSales table to a named query with the WHERE condition DueDateKey < 20020101. In case you don't recall how this is done, we've included the steps here:
a. Open Adventure Works DW.dsv under the Data Source Views folder.
b. Right-click the FactInternetSales table in the diagram view and select the Replace Table With New Named Query menu item.
c. In the Create Named Query dialog, enter < 20020101 in the DueDateKey Filter text entry box. Your change is automatically reflected in the query window as shown in Figure 7-20. Click OK to continue.
3. In the DSV, right-click in the diagram view and select Add/Remove Tables from the context menu.
4. Add the FactInternetSales table to the "Included objects:" list and click OK.
5. In the diagram view, replace the FactInternetSales table with a named query.
6. In the named query, set the DueDateKey Filter to >= 20020101.
7. Rename the named query FactInternetSalesNew.
8. Deploy the AnalysisServices2008Tutorial project to your Analysis Services instance.
9. Connect to the AnalysisServices2008Tutorial database using SSMS.
10. Navigate to the measure group FactInternetSales.
11. Right-click the Partitions folder and select New Partition as shown in Figure 7-21.
12. Click Next on the welcome screen of the Partition Wizard.
13. Choose the named query FactInternetSalesNew as the source for the new partition, as shown in Figure 7-22, and click Next. On the Restrict Rows page (Figure 7-23), you can select the checkbox "Specify a query to restrict rows" if, as the warning on that page suggests, you need to restrict the query so that it filters the appropriate data for the partition. In this example FactInternetSalesNew already has the appropriate query restriction.
14. Click the Next button.
15. One way Analysis Services provides scalability is through remote partitions, where the partitions reside on two or more Analysis Services instances. On the Processing and Storage Locations page, shown in Figure 7-24, you can specify where to store the partition. You can specify a remote Analysis Services instance on this page, but a data source pointing to that remote instance must already have been defined in this database. You can also change the storage location where you want the partition's data to reside on any of the Analysis Services instances. Choose the default options as shown in Figure 7-24 and click Next.
16. In the final page of the Partition Wizard, select Design aggregations later and Process Now as shown in Figure 7-25 and click Finish.
17. In the Process Partition dialog, click OK to process the FactInternetSalesNew partition.
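Behind the scenes, the wizard issues a Create command for the new partition. The sketch below is a hedged illustration of what such a command might look like for a query-bound MOLAP partition; the object IDs, the data source ID, and the query text are assumptions based on this tutorial, and the exact element set SSMS generates may differ.

```xml
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
    <CubeID>Adventure Works DW</CubeID>
    <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <ID>Fact Internet Sales New</ID>
      <Name>FactInternetSalesNew</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>SELECT * FROM [dbo].[FactInternetSales] WHERE DueDateKey &gt;= 20020101</QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
    </Partition>
  </ObjectDefinition>
</Create>
```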
The partition will be processed and you can browse the cube data. The number of partitions for a specific cube typically increases over time. Users might not browse historical data at the same granularity as recent data. For example, you might be more interested in comparing Sales data for the current month against the previous month than against data from five years ago; however, you might still want to compare year-over-year data for several years. Merging partitions that hold older data can improve query performance. You learn about the considerations to take into account when merging partitions in Chapter 14.
There are two main requirements to merge partitions: The partitions should be of the same storage type, and they need to be on the same Analysis Services instance. Therefore if you have remote partitions, they can be merged together only if they are on the same Analysis Services instance. To merge partitions, do the following:
1. Launch the Merge Partition dialog by right-clicking the Partitions folder under the Fact Internet Sales measure group.
2. In the Merge Partition dialog shown in Figure 7-26, select the target partition that will contain the merged data and the source partitions whose data is to be merged, and click OK.
All the data from the source partitions is merged into the target partition, and the source partitions are deleted as part of this operation. SSMS sends the following command to Analysis Services to merge the partitions:
<MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Sources>
    <Source>
      <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
      <PartitionID>Fact Internet Sales</PartitionID>
    </Source>
  </Sources>
  <Target>
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
    <CubeID>Adventure Works DW</CubeID>
    <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
    <PartitionID>Fact Internet Sales New</PartitionID>
  </Target>
</MergePartitions>
Assemblies, also called stored procedures, help you perform specific tasks on an Analysis Services database or across the server. For example, Analysis Services has four assemblies installed that provide the functionality of calling Excel or VBA functions within your MDX queries. The System assembly is used for operations such as Backup and Restore (for example, retrieving the folders containing Analysis Services backup files), as well as for supporting data mining algorithm requests. Analysis Services 2008 supports two types of assemblies: COM user-defined functions (UDFs) and .NET assemblies. COM UDFs are primarily supported for backward compatibility with Analysis Services 2000. You learn about .NET and COM assemblies and how to build and deploy them in Chapter 11. In this section you learn about managing assemblies on your Analysis Services instance.
Assemblies can be added only by Analysis Services administrators. You need to make sure your instance of Analysis Services remains safe and secure irrespective of the operations performed by stored procedures. Security is always a concern, and you do not want any assembly to bring down the server. Because hackers try to attack servers, most software products are now built to be secure by default, and the administrator must explicitly enable certain components and options to make them available to users. By default, Analysis Services does not allow execution of COM UDFs; the administrator first needs to set the server property Feature\ComUdfEnabled to true (a value of 1 in the Analysis Services config file) to enable them. This is accomplished using the Analysis Server Properties dialog.
The key to managing assemblies is to understand the nature of the assembly and to set appropriate properties while adding assemblies to your Analysis Services server. Figure 7-27 shows the dialog used to add assemblies to the server or to a specific database. This dialog can be launched by right-clicking the Assemblies folder under a specific database and choosing New Assembly.
Analysis Services supports two types of assemblies: COM and .NET CLR assemblies. Once you specify the type and name of an assembly in the Register Assembly dialog, you need to specify its security information. Two parameters control the security of these stored procedures: Impersonation and Permissions. Permissions allow you to define the scope of access for the assembly, such as accessing the file system, accessing the network, and accessing unmanaged code. There are three permission values:
❑ Safe: The most secure of the three permissions. When the Safe permission set is specified for an assembly, it means that the assembly is intended only for computation and the assembly cannot access any protected resource. It guarantees protection against information leaks and elevation attacks by malicious code.
❑ External access: This permission value allows the assembly to access external resources without compromising reliability, but does not offer any specific security guarantees. You can use this if you, as the DBA, trust the programmer's ability to write good code and if there is a need to access external resources, such as data from an external file.
❑ Unrestricted: This value is primarily intended for people who have a very good understanding of programming on servers and need access to all resources. This permission set does not guarantee any code security or reliability. Unrestricted access should be granted only to assemblies written by users who absolutely need access to external resources, have a very good understanding of security issues such as denial-of-service attacks and information leakage, and are able to handle all of these within their stored procedures. We recommend you use this option only when it is absolutely essential and you have full confidence in the programming abilities of the developer of the assembly.
All COM DLLs will have the Permissions parameter set to Unrestricted. The Impersonation parameter allows you to specify the account under which the stored procedure will be executed. There are five different values for Impersonation:
❑ Default: The Default value allows you to execute the stored procedure under a secure mode with minimum privileges. If the assembly is of type COM, the default value is "Use the credentials of the current user." For a .NET assembly, the default value depends on the permission set defined. If the permission set is Safe, the impersonation mode will be Impersonate Service Account; if the permission set is External Access or Unrestricted, the impersonation mode will be Impersonate Current User.
❑ Anonymous: If you want the stored procedure to be executed as an anonymous user, you need to select Impersonate Anonymous. You will have limited access when the stored procedure is executed under this setting.
❑ Use the credentials of the current user: This impersonation mode is typically used when you want the stored procedure to be executed with the current user's credentials, and it is a safe option to select. If the stored procedure accesses external resources and the current user executing it does not have permissions on them, execution of the stored procedure will not cause any ill effects. One use of this impersonation mode is to define dynamic data security, where the current user's credentials are needed to access external resources.
❑ Use the service account: If you choose to use the service account, whenever the stored procedure is executed it will be executed under the credentials of service startup account for Analysis Services. An example of a stored procedure that would need this impersonation mode is an AMO stored procedure that does management operations on the server.
❑ Use a specific Windows username and password: If your business needs a stored procedure to always be executed in the context of a specific user, you need to choose this option. You need to specify a Windows account name and password for this impersonation mode. A typical example where you might use this option is when you access an external data source or web service to retrieve data with this account and utilize that value within the stored procedure for computation. If you choose this option, you will need to make sure you update the password on the account when there is a password change.
We recommend that COM assemblies use the "Use the credentials of the current user" impersonation mode, whereas for .NET CLR assemblies you should choose the impersonation mode appropriate to your scenario. As the administrator of Analysis Services, you need to choose impersonation and permission settings that suit your business needs without compromising the security of your Analysis Services instance.
When you register an assembly with a specific Analysis Services database or for the server using the Register Assembly dialog, AMO will be used to set up the correct properties. This, in turn, sends a Create command to the Analysis Services instance as shown here:
<Create AllowOverwrite="true" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
  </ParentObject>
  <ObjectDefinition>
    <Assembly xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
        xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
        xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
        xsi:type="ClrAssembly">
      <ID>AmoSproc</ID>
      <Name>AmoSproc</Name>
      <Description />
      <ImpersonationInfo>
        <ImpersonationMode>Default</ImpersonationMode>
      </ImpersonationInfo>
      <Files>
        <File>
          <Name>AmoSproc.dll</Name>
          <Type>Main</Type>
          <Data>
            <Block>...content of the stored procedure...</Block>
            <Block>...content of the stored procedure...</Block>
            <Block>...content of the stored procedure...</Block>
            <Block>...content of the stored procedure...</Block>
          </Data>
        </File>
      </Files>
      <PermissionSet>Safe</PermissionSet>
    </Assembly>
  </ObjectDefinition>
</Create>
The information within each Block tag is a large amount of text content, which for illustration purposes has been restricted to a single line here. The text within the Block tags is the content of the assembly to be registered, which will be stored within the Analysis Services instance. When queries use functions within the assembly, Analysis Services loads the assembly within the same process and executes the CLR assembly with the appropriate parameters. The results from the assembly are passed back to Analysis Services for further evaluation of the query.
Backup and Restore
Backup is an operation that is part of every individual's life. If you have an important document, you make a photocopy as a backup. Similarly, backup is an extremely critical operation for any data warehouse. There are several reasons why you should periodically back up your Analysis Services database: one is disaster recovery; another is auditing. Irrespective of purpose, it is always a good idea to back up your database on a periodic basis. You can back up databases on your Analysis Services instance through SSMS. Follow these steps to back up the AnalysisServices2008Tutorial database:
1. Connect to the Analysis Services instance using SSMS.
2. Navigate to the database AnalysisServices2008Tutorial in the Object Explorer window.
3. Right-click the database and select Back Up.
You will see the Backup dialog shown in Figure 7-28. By default the dialog uses the database name as the backup name, and the backup file will be created in the Backup folder of your Analysis Services installation. If you want the backup to be stored in a different drive or directory, you first need to add the appropriate directory to the Analysis Services server property AllowedBrowsingFolders. You can then choose the folder by clicking Browse in the Backup Database dialog.
You have the option to encrypt the database and specify a password. You'll need that password to restore the database. If you have remote partitions in the database, you have the option of specifying the backup location for each remote partition. Backup of these partitions is done on the respective Analysis Services instances on those machines.
4. Disable the option to Encrypt the backup file.
5. Select the option "Allow file overwrite" to overwrite any existing backup files with the same name.
6. Choose the default backup file name and click OK.
The following command is sent to the Analysis Services instance by SSMS to back up the database AnalysisServices2008Tutorial:
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
  </Object>
  <File>AnalysisServices2008Tutorial.abf</File>
  <AllowOverwrite>true</AllowOverwrite>
</Backup>
An Analysis Services 2008 backup file with the extension .abf will be created in the Backup folder (encrypted with your password, if you specified one). Backing up Analysis Services 2005 databases larger than 10GB used to take a long time. Analysis Services 2008 has made specific enhancements to backup performance intended to result in shorter backup times for databases of any size. Analysis Services 2008 also allows you to back up multiple databases at the same time. Through SQL Server Management Studio you can launch the backup command for multiple databases and run the backups in parallel. Alternatively, you can create a DDL command that executes the backup of multiple databases within the same command.
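As a sketch of the latter approach, the following hedged example wraps two Backup commands in a single XMLA Batch so that both databases are backed up in one command; the second database name (AdventureWorksDW) is purely an illustrative assumption.

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Backup>
    <Object>
      <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
    </Object>
    <File>AnalysisServices2008Tutorial.abf</File>
    <AllowOverwrite>true</AllowOverwrite>
  </Backup>
  <Backup>
    <Object>
      <DatabaseID>AdventureWorksDW</DatabaseID>
    </Object>
    <File>AdventureWorksDW.abf</File>
    <AllowOverwrite>true</AllowOverwrite>
  </Backup>
</Batch>
```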
Whenever you want to restore an Analysis Services database for which you have a backup, you can do so using the Restore Database dialog. Follow these steps to restore the AnalysisServices2008Tutorial backup:
1. In the SSMS Object Explorer, right-click the AnalysisServices2008Tutorial database and select Delete.
2. In the Delete Objects dialog shown in Figure 7-29, click OK.
3. In the SSMS Object Explorer, right-click the Databases folder as shown in Figure 7-30 and select Restore.
4. In the Restore Database dialog (see Figure 7-31), click the Browse button next to Backup File.
5. In the Locate Database Files dialog, navigate to the Backup folder, select the AnalysisServices2008Tutorial.abf file as shown in Figure 7-32, and click OK.
6. Type AnalysisServices2008Tutorial in the combo box next to Restore Database and click OK. SSMS now sends the following XMLA command to restore the database on your Analysis Services instance:
<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup\AnalysisServices2008Tutorial.abf</File>
  <DatabaseName>AnalysisServices2008Tutorial</DatabaseName>
</Restore>
If you refresh the list of databases on your Analysis Services instance, you should now see the AnalysisServices2008Tutorial database in the SSMS Object Explorer. If a database with the same name and ID exists on your Analysis Services instance, you can restore the newer database by clicking the Allow Database Overwrite checkbox in the Restore dialog.
Once the database has been restored you can query it. You can take a backup of a database from your test servers and restore it on your production server. In that case you might choose to skip the security information if the security defined on the production servers differs from that on your test servers; you would then need to secure the database by defining the right security on the production servers. If, instead, the backup was taken on your production server and you are restoring the database on an upgraded production machine, we expect you to restore the database with its security information.
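To illustrate these options, here is a hedged sketch of a Restore command that overwrites an existing database while skipping role membership. The Security element (which accepts values such as CopyAll, SkipMembership, and IgnoreSecurity) is shown as an assumption to adapt to your situation, and a Password element would be needed only if the backup was encrypted.

```xml
<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup\AnalysisServices2008Tutorial.abf</File>
  <DatabaseName>AnalysisServices2008Tutorial</DatabaseName>
  <AllowOverwrite>true</AllowOverwrite>
  <Security>SkipMembership</Security>
</Restore>
```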
Detach and Attach
Analysis Services provides the functionality to detach a complete database from an Analysis Services instance and attach it to the same or another instance. These detach and attach commands differ from the backup and restore commands. The attach operation allows you to mark a specific database read-only, and the database's data files do not have to be stored in the default Data folder path of your Analysis Services instance. The read-only feature enables a shared scalable architecture for Analysis Services in situations where you need to scale out the server to multiple users querying a specific Analysis Services database.
Follow these steps to detach the AnalysisServices2008Tutorial database:
1. In SSMS, right-click the AnalysisServices2008Tutorial database and select Detach as shown in Figure 7-33.
2. In the Detach Database dialog shown in Figure 7-34, click OK.
SSMS sends the following XMLA command to the Analysis Services instance:
<Detach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
  </Object>
</Detach>
After receiving the detach command, Analysis Services first takes a write lock on the database to be detached, which means all existing DDL operations must complete before the detach command starts. The Analysis Services instance creates a detach log file named AnalysisServices2008Tutorial.detach_log that contains the version information, the key used for encrypting the database (if specified), and a few additional pieces of information about the database. This log file is created within the database folder as shown in Figure 7-35. Analysis Services then commits the operation and removes the database from the server. The entire database folder is now self-contained and can be copied and attached to another Analysis Services instance.
You can now attach the detached database to your Analysis Services instance. Follow these steps to attach the database in read-only mode:
1. Move the AnalysisServices2008Tutorial database folder that was detached from its original location under %Program Files%\Microsoft SQL Server\MSAS10.SQLServer\OLAP\Data to %Program Files%\Microsoft SQL Server\MSAS10.SQLServer\OLAP.
2. If prompted by the operating system to provide administrative privileges to move the folder, provide the permissions.
3. In the SSMS Object Explorer, right-click the Databases folder and select Attach as shown in Figure 7-36.
4. In the Attach Database dialog, specify the full path of the AnalysisServices2008Tutorial database as shown in Figure 7 – 37 .
5. Enable the checkbox next to Read-only and click OK.
6. Refresh the Databases folder in the SSMS Object Explorer.
You will now see that the AnalysisServices2008Tutorial database has been attached to the Analysis Services instance. Because you attached the database as read-only, the database is shown grayed out in the SSMS Object Explorer, as shown in Figure 7-38. You can also confirm that the database is read-only by right-clicking the database and selecting Properties; you will see the Read-Write Mode property set to ReadOnly in the Database Properties dialog. The read-only database feature in Analysis Services enables a shared scalable database architecture where a single database folder on a Storage Area Network (SAN) is attached to multiple Analysis Services instances. You learn how this helps query performance in Chapter 15.
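For reference, the Attach Database dialog sends an Attach command along the lines of the hedged sketch below; the folder path (including the .0.db suffix that Analysis Services appends to database folder names) is an assumption to adjust to your installation.

```xml
<Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Folder>C:\Program Files\Microsoft SQL Server\MSAS10.SQLServer\OLAP\AnalysisServices2008Tutorial.0.db</Folder>
  <ReadWriteMode xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">readOnly</ReadWriteMode>
</Attach>
```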
Synchronization
Synchronization sounds like a sophisticated, highly technical area of endeavor, but actually it couldn't be simpler; consider synchronization as just replication for Analysis Services 2008 databases. The name is suitable because the feature allows you to "synchronize" a database resident on one Analysis Services instance with the same database on another instance. Typically we expect Analysis Services DBAs to test a database in a test environment before moving it to their production servers. DBAs often have to back up the database on the test servers and restore it on the production servers; with the synchronization feature in Analysis Services 2008, however, they can move well-tested databases from test servers to production servers with ease.
If you have an Analysis Services instance actively supporting a population of users, you want to be able to update the database they're querying against without taking the system down. Using the Synchronize Database Wizard, you can accomplish the database update seamlessly. The wizard copies both data and metadata from your development and test machine (staging server) to the production server, and automatically switches users to the newly copied data and metadata based on conditions defined on the production server. To try this out, you need a second instance of Analysis Services, either on another machine or on your current machine. We recommend you install another instance called SS2008 on the same machine. Follow these steps to synchronize a database from the default instance to the new named instance SS2008:
1. Launch SSMS and connect to your default instance (localhost) and named instance (localhost\SS2008) of Analysis Services as shown in Figure 7-39.
2. Right-click the Databases folder of the named instance and select Synchronize as shown in Figure 7-39.
3. If you see the welcome screen, click Next.
4. In the Select Database to Synchronize page of the Synchronize Database Wizard, type the default instance localhost as the Source server, select the Source database AnalysisServices2008Tutorial as shown in Figure 7-40, and click Next. In the Specify Locations for Local Partitions page you can change the locations of the partitions during synchronization if the destination server allows it. In Figure 7-41 you can see that all the partitions of AnalysisServices2008Tutorial will be restored in the default location.
5. Click Next in the Specify Locations for Local Partitions page. On the Synchronization Options page you can specify the level of security information to copy during synchronization, as shown in Figure 7-42: copy all the roles and members, skip the membership information for the roles, or skip all the security information. Analysis Services 2008 provides these options because customers typically synchronize databases from test servers to production servers. You can choose to keep all roles if the security permissions in the test environment are identical to those in the production environment. If the security permissions have been defined so that they can be utilized in the production environment but the users there are different, you can use the Skip membership option; you will then need to define the membership after synchronization. If you choose the Ignore all option, you will need to define both the roles and their membership after synchronization.
6. Select the Skip membership option as shown in Figure 7 – 42 and click Next. In the Select Synchronization Method page you can choose to start the synchronization process immediately or script the command to a file and later send the command to the destination server using SSMS or through other programs.
7. Select the Synchronize now method as shown in Figure 7-43 and click Next.
8. Review the synchronization options you have selected in the Completing the Wizard page and click Finish.
SSMS sends the following XMLA command to the Analysis Services SS2008 instance to synchronize the AnalysisServices2008Tutorial database from the source instance:
<Synchronize xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP.4;Data Source=localhost\SQLServer2008;ConnectTo=10.0;Integrated Security=SSPI;Initial Catalog=AnalysisServices2008Tutorial</ConnectionString>
    <Object>
      <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
  <LocaleIdentifier>1033</LocaleIdentifier>
</PropertyList>
You should be aware that the destination server contacts the source server for synchronization using the credentials of its service startup account, not those of the user who initiated the synchronize operation from SSMS. You therefore need to make sure the service startup account of the destination server has permission to access the databases on the source server. The source server creates a backup of the objects that have changed, compresses them, and sends them to the destination server, where the objects are first restored under a temporary name. If there are active queries executing against the database on the destination server, the server waits for those queries to complete and then updates the objects. On the source server, the objects are locked during synchronization: until the objects have been sent to the destination server, you cannot perform operations, such as processing, that would modify them.
9. You will see the progress of the synchronization operation in the Database Synchronization Progress dialog as shown in Figure 7-44. The progress percentage shown on this page is updated periodically. After the synchronization completes, you will see a completion message as shown in Figure 7-44. Click Close.
You can use the synchronization wizard periodically to synchronize the database from source server to destination server. We typically expect the source server to be your test environment and the destination server to be the production environment. Synchronization is a pull model operation where the destination server pulls data from the source server. If a version of the database exists on the destination server, the source server only sends the data. Typically on the destination server you might have established new membership or security permissions. If you choose appropriate options to skip membership or ignore roles during synchronization, security permissions on the destination servers will not be changed.
There is an important security requirement you must implement to complete a successful synchronization: the destination server's service startup account must have access to the databases on the source server that are to be synchronized.
Managing Security
As an administrator, managing security is the most critical operation for an Analysis Services database. The phrase "managing security" can mean several things: managing the roles of Analysis Services databases, using the Analysis Services dimension and cell security features, enabling and disabling features in Analysis Services, and setting up Analysis Services with appropriate firewall protection. The latter ensures your Analysis Services instance can communicate appropriately over the Internet and intranet.
Server and Database Roles
Roles are of vital importance to securing the databases on your Analysis Services instance. You will deal with two kinds of roles when using the product: the server role and database roles. The server role is required for any login that performs administrative functions, whether through the user interface (SSMS) or programmatically using AMO. Database roles are defined on an as-needed basis. You can grant users read or read/write permissions on all objects in a database, or at a granularity as fine as specific cells in a cube. You learned about the server role and how to specify its membership earlier in this chapter. In Chapters 9 and 22 you learn to define read/write access for dimensions and cubes in a database.
The Create Role dialog, accessed by right-clicking the Roles folder under a database node in Object Explorer, can grant full database access to a user, as seen earlier in this chapter. In addition, the Role objects in a database help the DBA define fine-grained access to various database objects. The pages in the Create Role dialog are identical to the Role designer, which you learn more about in Chapter 22. Please see Chapter 9 for more details on granting dimension and cube access to Analysis Services users. Chapter 22 provides extensive coverage of database role management through a scenario that shows how to restrict access to specific dimension members (dimension security) or cells in a cube (cell security).
To recap briefly, as the database administrator you can add roles to a specific database, add members to a role, and give a role read, write, or read/write access. In addition, you can specify cell security and dimension security for the role using MDX expressions to limit access to specific cell data or dimension members. When a user is a member of multiple roles, Analysis Services grants access to data in the least restrictive manner: if one role denies the user access to certain dimension members but another role grants access to the same members, the user will be able to access those members.
Enabling or Disabling Features
Managing database roles is one aspect of securing data in Analysis Services. You can add users to the server role so that your Analysis Services instance has several administrators, each of whom can define appropriate levels of access to databases and the objects within them. However, Analysis Services 2008 provides another level of protection: you can disable features that your users do not need. One of the most common ways to protect a server from security attacks is to reduce its attack surface by running the server or application with minimal functionality. For example, you can turn off unused operating system services that listen for user requests by default, and an administrator can re-enable them as they are needed. Similarly, Analysis Services allows you to enable or disable certain features to help prevent security attacks, thereby making your installation more secure. The following server properties can be used to enable or disable certain features or services of your Analysis Services instance:
The properties LinkInsideInstanceEnabled, LinkToOtherInstanceEnabled, and LinkFromOtherInstanceEnabled enable or disable linked objects (measure groups and dimensions) within the same instance and between instances of Analysis Services. The properties ManagedCodeEnabled and COMUDFEnabled allow or disallow loading assemblies into Analysis Services. You can allow or deny ad hoc open rowset data mining queries using the property Datamining\AllowAdhocOpenRowSetQueries. The server property Security\RequireClientAuthentication allows or denies anonymous connections to Analysis Services. You can force clients to connect over encrypted connections with the ConnStringEncryptionEnabled property. You can change all of these properties using the properties dialog for an Analysis Services instance.
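When you change a server property in the properties dialog, SSMS sends an XMLA Alter command against the server object. The following sketch shows roughly what such a command looks like; the instance ID and name are placeholders, and the exact command for your server can be captured with the Script button in the properties dialog:

```xml
<Alter AllowCreate="true" ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- An empty Object element targets the server itself -->
  <Object />
  <ObjectDefinition>
    <Server>
      <ID>MyMachine</ID>
      <Name>MyMachine</Name>
      <ServerProperties>
        <!-- Disable linked objects from other instances -->
        <ServerProperty>
          <Name>LinkToOtherInstanceEnabled</Name>
          <Value>0</Value>
        </ServerProperty>
      </ServerProperties>
    </Server>
  </ObjectDefinition>
</Alter>
```

Keep in mind that some server properties take effect immediately, whereas others require a restart of the Analysis Services service; the properties dialog indicates which is which.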
As an administrator, you need to keep databases and their objects up to date; otherwise, your end users will query out-of-date information. After creating an OLAP database in SSMS, you can use the Create Role dialog to create roles and grant specific users the permissions that make them database administrators. To reduce confusion, deployed Analysis Services projects should have the same name as the database created by the server administrator. Once the database is created, the server or database administrator can perform administrative operations using SSMS, such as adding new roles, adding assemblies, or processing objects periodically.

There might be design changes you have to make to the database based on additional requirements from your end users. In that circumstance you cannot simply make the changes in the original project and deploy it to the Analysis Services instance, because objects added to the database after deployment would likely be deleted. Analysis Services 2008 provides two ways to make such changes. The first is to connect to the Analysis Services database on the server directly through BIDS and make the changes in a mode called "online mode." The second is to import the database from your Analysis Services instance using the Import Analysis Services 2008 Database project (one of the Business Intelligence project templates available in BIDS), make changes to the database in project mode, and then redeploy the project to the Analysis Services instance. We recommend the former: you not only make changes directly on the server, which updates the objects immediately, but you can also perform processing tasks on the objects. In fact, instead of designing your Analysis Services database in project mode and then deploying it to the Analysis Services instance, you can design the entire database by connecting to it in online mode.
Follow these steps to connect to an Analysis Services database in online mode using BIDS:
1. Launch BIDS from All Programs → Microsoft SQL Server 2008 → SQL Server Business Intelligence Development Studio.
2. To open an Analysis Services database in online mode, select File → Open → Analysis Services Database as shown in Figure 7-45. In the Connect To Database dialog you have the choice of opening an existing database or creating a new database on the Analysis Services instance. You need to be a server administrator to create a database on the Analysis Services instance.
3. Type the Analysis Services instance name, select a database to open in online mode as shown in Figure 7-46, and click OK.
BIDS connects to the Analysis Services instance and retrieves all the database objects. You will see all the objects of the database in the Solution Explorer, just as in project mode, as shown in Figure 7-47. Notice that with the database connected in online mode, the Analysis Services instance name is shown next to the database name in the Solution Explorer. All the operations you could perform on the objects within a database in project mode can also be performed in online mode. You do not have a deployment option for the database; instead, you save all your changes directly to the server. We recommend that you explore making changes or adding new objects in online mode and saving the changes directly on the Analysis Services instance.
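Whether you process objects from SSMS or from BIDS in online mode, the request ultimately reaches the server as an XMLA Process command. A minimal sketch for fully processing the tutorial database might look like this (the database ID is assumed from the tutorial; the processing type is one of several options, such as ProcessFull or ProcessUpdate):

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- Target the whole database; add CubeID or DimensionID
         elements to scope processing to a single object -->
    <DatabaseID>AnalysisServices2008Tutorial</DatabaseID>
  </Object>
  <!-- ProcessFull discards and rebuilds all data for the object -->
  <Type>ProcessFull</Type>
</Process>
```

You can see the exact command SSMS generates for any processing operation by clicking the Script button at the top of the Process dialog.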
In this chapter you learned to administer SQL Server Analysis Services 2008 using SSMS, including the various management operations that can be accomplished on Analysis Services databases. If you're a database administrator experienced with administering Analysis Services 2000, this chapter may have been largely about getting used to the new management console, SSMS. If you have administered Analysis Services 2005, the most notable new administration feature is the ability to detach and attach databases. You have seen that SSMS provides the ability to manage your Analysis Services instances effectively. In addition to learning the various management operations in SSMS, you also learned which XMLA commands SSMS sends to the Analysis Services instance, giving you a better understanding of the interactions between SSMS and an Analysis Services instance.
In this chapter you learned that each administrative task can be executed through the user interface using SSMS or BIDS. In Chapter 13 you learn to perform Analysis Services management operations programmatically using the Analysis Management Objects (AMO) object model.
You learned how useful backup and restore can be for auditing purposes, specifically by providing auditors and analysts with point-in-time snapshots of cubes. You also saw how security is configured for administrative versus user access to the server, using server roles and database roles, respectively. Finally, you were introduced to server properties. In the next chapter you learn some of the advanced topics in creating dimensions.