SQL Backup Archives - SQL Authority with Pinal Dave

SQL SERVER – Msg 3292: A Failure Occurred While Attempting to Execute Backup or Restore With a URL Device Specified


In a recent project, my customer wanted to configure SQL Server backups to Azure Blob Storage. When we tried to take a backup, we ran into an error. In this blog, we will learn how to fix error message 3292 – A failure occurred while attempting to execute Backup or Restore with a URL device specified.

Here is the command we tried:

BACKUP DATABASE master TO URL = 'https://sqldbprodbackups.blob.core.windows.net/daily/master.bak'
WITH CREDENTIAL = 'BackupCredential'
GO
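For context, the 'BackupCredential' referenced above is an access-key based credential. A minimal sketch of how such a credential is created (the account name and secret below are placeholders) looks like this:

CREATE CREDENTIAL [BackupCredential]
WITH IDENTITY = 'sqldbprodbackups',   -- storage account name
SECRET = '<storage account access key>';
GO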

Here is the error we were seeing:

Msg 3292, Level 16, State 6, Line 1
A failure occurred while attempting to execute Backup or Restore with a URL device specified. Consult the operating system error log for details.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

We checked and verified that the same command worked from another server and the backup completed fine, which meant the issue was not with the storage account. I found the below MSDN link to troubleshoot the issue.

SQL Server Backup to URL Best Practices and Troubleshooting

As suggested in that article, I enabled trace flag 3051 to get more detailed messages.

DBCC TRACEON (3051,3605,-1);
GO

After this, I ran the backup command again, and here is the information I received in the ERRORLOG file:

2018-07-04 20:52:20.83 spid65 DBCC TRACEON 3051, server process ID (SPID) 65. This is an informational message only; no user action is required.
2018-07-04 20:52:20.83 spid65 DBCC TRACEON 3605, server process ID (SPID) 65. This is an informational message only; no user action is required.
2018-07-04 20:52:23.37 spid65 VDI: “C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\BackupToUrl.exe” “b” “p” “xxxx” “yyyy” “zzzz” “NOFORMAT” “4D005300530051004C00530045005200560045005200” “C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Log” “DB” “6D0061007300740065007200” “TRACE”
2018-07-04 20:52:23.37 spid65 BackupToUrl: couldn’t load process Error Code: 80070002
2018-07-04 20:52:23.37 Backup Error: 3041, Severity: 16, State: 1.
2018-07-04 20:52:23.37 Backup BACKUP failed to complete the command BACKUP DATABASE master. Check the backup application log for detailed messages.

I have truncated the messages to fit into the blog; instead of xxxx, yyyy and zzzz there were long strings. Did we see anything interesting? Below is the interesting message.

BackupToUrl: couldn’t load process Error Code:  80070002

WORKAROUND/SOLUTION

You can use my earlier blog to convert the above code to a meaningful, human-readable error message.

How to Convert Hex Windows Error Codes to the Meaningful Error Message – 0x80040002 and 0x80040005 and others?
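If you just want a quick lookup: for 0x8007xxxx codes, the low 16 bits are the Win32 error number (here 0x0002 = 2). One way to decode it from T-SQL, assuming xp_cmdshell is enabled on your instance (you can also run the inner command directly in a command prompt), is:

-- Win32 error 2 = The system cannot find the file specified.
EXEC master..xp_cmdshell 'net helpmsg 2';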

As per the code, it means “The system cannot find the file specified.” When I checked the BINN folder, we found that someone had renamed BackupToUrl.exe to BackupToUrl.exe.dll.

Once we renamed the file back to its original name, the backup started working fine.

Here is the command to turn off the trace flag.

DBCC TRACEOFF (3051,3605,-1);
GO 

Have you used any other trace flags for troubleshooting backup/restore issues?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Msg 3292: A Failure Occurred While Attempting to Execute Backup or Restore With a URL Device Specified


SQL SERVER – Backup to URL – Script to Generate Credential and Backup using Shared Access Signature (SAS)


As I mentioned in my earlier blog, Backup to URL is one of the common methods SQL Server uses to perform a backup to Azure Blob Storage. In this blog, I am going to share a script that generates the CREATE CREDENTIAL and BACKUP commands using a Shared Access Signature, also called a SAS token.

If you don’t know it already, Backup to URL has two methods to connect to the storage account:

  1. Credential by using Access Keys.
  2. Credential by using SAS token.

In my earlier blog, I shared a script that uses the first method: SQL SERVER – Msg 3292: A Failure Occurred While Attempting to Execute Backup or Restore With a URL Device Specified

In this blog, I will show the second method – backup using a Shared Access Signature.

WORKAROUND/SOLUTION

In the script, we need to provide the below parameters.

  1. @StorageAccountName: In the Azure portal, go to “Home” > “Storage accounts” and pick the account which you want to use. In my demo, it is “sqldbprodbackups”.
  2. @ContainerName: To get a container name, you can refer to the below screenshot. You need to click on “Browse Blobs”. If you don’t have a container created already, then click on the “+” symbol and create a new one. In my Azure subscription, I have already created one called “dailybackups”, as shown below. You can also see @StorageAccountName on the same page.

[Screenshot: Backup-SAS-Script-01]

  3. @SASKey: Refer to the below steps for SAS key generation.

We need to click on “Shared access signature” as shown below.

[Screenshot: Backup-SAS-Script-02]

Then we need to click on the “Generate SAS and connection string” button. Once done, scroll down and we should see something like the below.

[Screenshot: Backup-SAS-Script-03]

That value should be assigned to the variable @SASKey.

---- Backup To URL (using SAS Token) :
--- =================================== --- 
DECLARE @Date AS NVARCHAR(25)
	,@TSQL AS NVARCHAR(MAX)
	,@ContainerName AS NVARCHAR(MAX)
	,@StorageAccountName AS VARCHAR(MAX)
	,@SASKey AS VARCHAR(MAX)
	,@DatabaseName AS SYSNAME;
SELECT @Date = REPLACE(REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 100), '  ', '_'), ' ', '_'), '-', '_'), ':', '_');
SELECT @StorageAccountName = ''; --- Find this from Azure Portal
SELECT @ContainerName = ''; --- Find this from Azure Portal
SELECT @SASKey = ''; --- Find this from Azure Portal
SELECT @DatabaseName = 'master';
IF NOT EXISTS (
		SELECT *
		FROM sys.credentials
		WHERE name = 'https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName
		)
BEGIN
	SELECT @TSQL = 'CREATE CREDENTIAL [https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '] WITH IDENTITY = ''SHARED ACCESS SIGNATURE'', SECRET = ''' + REPLACE(@SASKey, '?sv=', 'sv=') + ''';'
	--SELECT @TSQL
	EXEC (@TSQL)
END
SELECT @TSQL = 'BACKUP DATABASE [' + @DatabaseName + '] TO '
SELECT @TSQL += 'URL = N''https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '/' + @DatabaseName + '_' + @Date + '.bak'''
SELECT @TSQL += ' WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, CHECKSUM, FORMAT, STATS = 1;'
--SELECT @TSQL
EXEC (@TSQL)

Once the script was executed, we could see the credential in SSMS and the backup in Azure.

[Screenshot: Backup-SAS-Script-04]

[Screenshot: Backup-SAS-Script-05]
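If you prefer to verify from T-SQL rather than from SSMS and the portal, a quick check along these lines (a sketch; sys.credentials and the msdb backup history tables are standard) works as well:

-- Credentials created for Backup to URL
SELECT name, credential_identity FROM sys.credentials;

-- Most recent backups and the URL device they were written to
SELECT TOP (5) bs.database_name, bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;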

Hope this helps you in creating the script in an easier way.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Backup to URL – Script to Generate Credential and Backup using Shared Access Signature (SAS)

SQL SERVER – Backup to URL – Script to Perform Stripped Backup using Shared Access Signature (SAS)


In my previous blogs about Backup to URL, listed below, I have shared scripts to take backups using Access Keys and a SAS token. One of my blog readers wanted to take a striped backup to Azure Blob Storage, so in this blog I am sharing a script to perform a striped backup using a shared access signature (SAS token).

SQL SERVER – Backup to URL – Script to Generate Credential and Backup using Shared Access Signature (SAS)
SQL SERVER – Msg 3292: A Failure Occurred While Attempting to Execute Backup or Restore With a URL Device Specified

WORKAROUND/SOLUTION

In this script, I am assuming that the credential has already been created using the earlier blog. You need to provide @StorageAccountName, @ContainerName, @DatabaseName and @NumberOfFiles, which controls how many files the backup is striped across. You can refer to my earlier blogs to find those details in the Azure portal.

---- Backup To URL (using SAS Token and striping) :
--- =================================== --- 
DECLARE @Date AS NVARCHAR(25)
	,@TSQL AS NVARCHAR(MAX)
	,@ContainerName AS NVARCHAR(MAX)
	,@StorageAccountName AS VARCHAR(MAX)
	,@SASKey AS VARCHAR(MAX)
	,@DatabaseName AS SYSNAME
	,@NumberOfFiles AS INTEGER
	,@temp_Count AS INTEGER = 1;
SELECT @Date = REPLACE(REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 100), '  ', '_'), ' ', '_'), '-', '_'), ':', '_');
SELECT @StorageAccountName = 'sqldbprodbackups'; --- Find this from Azure Portal
SELECT @ContainerName = 'dailybackups'; --- Find this from Azure Portal
SELECT @DatabaseName = 'master';
SELECT @NumberOfFiles = 5;-- Greater than 1
SELECT @TSQL = 'BACKUP DATABASE [' + @DatabaseName + '] TO '
WHILE @temp_Count <= @NumberOfFiles
BEGIN
	IF (@temp_Count != @NumberOfFiles)
	BEGIN
		SELECT @TSQL += 'URL = N''https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '/' + @DatabaseName + '_' + @Date + '_File_' + CONVERT(VARCHAR(10), @temp_Count) + '_of_'+ CONVERT(VARCHAR(10), @NumberOfFiles) + '.bak'','
	END
	ELSE
	BEGIN
		SELECT @TSQL += 'URL = N''https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '/' + @DatabaseName + '_' + @Date + '_File_' + CONVERT(VARCHAR(10), @temp_Count) + '_of_'+ CONVERT(VARCHAR(10), @NumberOfFiles) + '.bak'''
	END
	SET @temp_Count = @temp_Count + 1
END
SELECT @TSQL += ' WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, CHECKSUM, FORMAT, STATS = 1;'
--SELECT (@TSQL)
EXEC (@TSQL)
--- =================================== ---

As soon as the backup completed, I could see 5 files in the blob storage, which means the script is working as expected.

[Screenshot: sas-stripped-backup-01]
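You can also confirm the stripe count from msdb rather than the portal; a sketch of such a check (using the standard msdb backup history tables) is below:

-- Number of URL devices (stripes) per backup set of the database
SELECT bs.database_name, bs.backup_finish_date, COUNT(*) AS StripeCount
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'master'
GROUP BY bs.database_name, bs.backup_finish_date
ORDER BY bs.backup_finish_date DESC;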

Hope this script helps someone who wants to stripe automated backups. Feel free to modify and use it. Let me know if you have other scripts which you use, and share them with the world via the comments section.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Backup to URL – Script to Perform Stripped Backup using Shared Access Signature (SAS)

SQL SERVER – Backup Error: 3636 – An Error Occurred While Processing BackupMetadata


Recently, one of my old clients contacted me and informed me that they were having an issue with database backups on the secondary replica. In this blog, we will learn how to fix error 3636 – An error occurred while processing BackupMetadata.

I had provided consultancy to this customer to deploy an Always On Availability Group. We had designed the backups in such a way that log backups happen on the secondary replica. This was working well, but suddenly they started seeing this error:

An error occurred while processing ‘BackupMetadata’ metadata for database id 11 file id 1. [SQLSTATE 42000] (Error 3636)
Inconsistent metadata has been encountered. The only possible backup operation is a tail-log backup using the WITH CONTINUE_AFTER_ERROR or NO_TRUNCATE option.

The above output contains two error messages:

  1. Error 3636 – An error occurred while processing ‘%ls’ metadata for database id %d file id %d.
  2. Error 3046 – Inconsistent metadata has been encountered. The only possible backup operation is a tail-log backup using the WITH CONTINUE_AFTER_ERROR or NO_TRUNCATE option.

I asked for more background and history of the issue. They had an interesting situation: one of the availability databases showed enormous growth in the LDF file, and they went ahead and added a new drive and an additional log file to mitigate the situation. The huge transaction causing the growth was unavoidable for their business. When we looked at the timings, we found that the backup failures started after adding the new LDF file on the primary.

When I checked sys.master_files on the primary and the secondary, it was clear that the secondary did not have the new file yet. This was because the huge transaction was still being replayed on the secondary replica; the redo queue size was around 500 GB.
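A couple of quick checks along these lines can show the same thing (a sketch; 'YourAGDatabase' is a placeholder name):

-- Run on both replicas and compare the file lists
SELECT database_id, file_id, type_desc, name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('YourAGDatabase');

-- Redo queue size (in KB) for local databases on the secondary replica
SELECT DB_NAME(database_id) AS DatabaseName, redo_queue_size, redo_rate
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1;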

WORKAROUND/SOLUTION

The error message appeared because the transaction which added the new file on the primary replica had not yet been applied on the secondary. We waited for the redo queue to drain on this replica. As soon as both replicas showed the same file information for the database, the log backup executed fine.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Backup Error: 3636 – An Error Occurred While Processing BackupMetadata

SQL SERVER – FIX: Backup to URL Error: Operating System Error 50(The Request is Not Supported.)


It is always fun to work with the “Backup to URL” feature of SQL Server. The error messages which are raised come from the Azure storage side, and a SQL DBA often cannot make sense of them. While I was working with my VM to learn something about the Backup to URL feature, I realized that my backups were failing. In this blog, let us learn how to fix the Backup to URL error: Operating system error 50 (The request is not supported.). Here are the exact messages which I was getting in the ERRORLOG:

2018-08-17 00:58:22.85 spid125 Error: 18204, Severity: 16, State: 1.
2018-08-17 00:58:22.85 spid125 BackupDiskFile::CreateMedia: Backup device ‘https://sqlauthbackup.blob.core.windows.net/backupcontainer/sqlauthdb_ebd0fe66f91f43f199c3b52d803bb136_20180814005822-07.log’ failed to create. Operating system error 50(The request is not supported.).
2018-08-17 00:58:22.85 Backup Error: 3041, Severity: 16, State: 1.
2018-08-17 00:58:22.85 Backup BACKUP failed to complete the command BACKUP LOG sqlauthdb. Check the backup application log for detailed messages.

I have already blogged about the same error earlier where the cause was different.

SQL SERVER – Backup to URL error: Operating system error 50(The request is not supported.)

In the current situation, this was a managed backup which had been configured using the Azure portal. I had recently generated a new SAS token and updated it in the credential, and it had been failing with this error ever since.

WORKAROUND/SOLUTION

I had updated the SAS token by copying and pasting the value in SSMS (the UI shown below).

[Screenshot: BackupUrl-err-50-1]

It didn’t take much time to realize that I had missed removing the “?” symbol from the SAS token. The SAS token on the portal starts with “?sv”, and while creating a credential, we need to remove the “?” and start the value from “sv”.
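A sketch of fixing the existing credential in place (the credential name matches the container URL from the error above; the token is a placeholder) could look like this:

-- Note that the secret starts with 'sv=' and has no leading '?'
ALTER CREDENTIAL [https://sqlauthbackup.blob.core.windows.net/backupcontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=<rest of the SAS token without the leading question mark>';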

I have done the same in the script which is available on my earlier blog.

SQL SERVER – Backup to URL – Script to Generate Credential and Backup using Shared Access Signature (SAS)

Have you encountered a similar error and found some other cause? Please share via the comment section.

Reference: Pinal Dave (https://blog.SQLAuthority.com)

First appeared on SQL SERVER – FIX: Backup to URL Error: Operating System Error 50(The Request is Not Supported.)

SQL SERVER – Rebuild Index Job Failed – Error: 9002 – The Transaction Log for Database ‘PinalDB’ is Full Due to ‘LOG_BACKUP’


Sometimes the errors in SQL Server job history are not detailed enough to find the cause of an issue. In this blog, we will explore a situation where a rebuild index job was failing with error 9002 – The transaction log for database ‘PinalDB’ is full due to ‘LOG_BACKUP’.

THE PROBLEM

In this situation, my client wanted to know the cause of the rebuild index job failure. The complete error message in the job history was as follows:

Executing the query “ALTER INDEX [PK_auditID] ON [dbo].[tbl_audi…” failed with the following error: “The transaction log for database ‘PinalDB’ is full due to ‘LOG_BACKUP’.

The statement has been terminated.”. Possible failure reasons: Problems with the query, “ResultSet” property not set correctly, parameters not set correctly, or connection not established correctly.

THE ROOT CAUSE ANALYSIS APPROACH

If we look at the message, it is clear that the LDF file was full and that caused the rebuild index job to fail. My client informed me that they were taking regular log backups, so why was the LDF full with a ‘LOG_BACKUP’ reason? As a normal way to find the cause, I always ask to see the SQL Server ERRORLOG and look for interesting messages around the time the issue was reported. If you are not familiar with the SQL Server ERRORLOG, then you must read my earlier blog on the same topic.

SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location

In the ERRORLOG, we could see the below messages:

2018-09-06 05:00:07.36 Backup Log was backed up. Database: PINALDB, creation date(time): 2017/06/07(11:11:11), first LSN: 3384654:6547:1, last LSN: 3384654:6550:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘J:\Backup\PINALDB\PINALDB_backup_2018_09_06_050005_5978479.trn’}). This is an informational message only. No user action is required.
2018-09-06 05:51:10.57 spid160 Error: 9002, Severity: 17, State: 2.
2018-09-06 05:51:10.57 spid160 The transaction log for database ‘PINALDB’ is full due to ‘LOG_BACKUP’.
2018-09-06 06:00:01.76 spid16s Error: 9002, Severity: 17, State: 2.
2018-09-06 06:00:01.76 spid16s The transaction log for database ‘PINALDB’ is full due to ‘LOG_BACKUP’.
2018-09-06 06:00:01.76 spid16s Could not write a checkpoint record in database PINALDB because the log is out of space. Contact the database administrator to truncate the log or allocate more space to the database log files.
2018-09-06 06:00:01.77 spid16s Error: 5901, Severity: 16, State: 1.
2018-09-06 06:00:01.77 spid16s One or more recovery units belonging to database ‘PINALDB’ failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.

The last two error messages give more information and confirm that we were running out of log space. Further, I looked into the Event Log and found a breakthrough message there:

09/06/2018 05:51:39 AM   Warning       2013    srv      The M: disk is at or near capacity.  You may need to delete some files.

The time is EXACTLY the same as the messages in the SQL ERRORLOG. I checked their database configuration using sys.database_files and found that the M drive holds only the LDF file for this database.
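The check itself is a one-liner; a sketch of the kind of query I use to see the file layout together with free space on each volume is:

-- Run in the context of the affected database
SELECT f.name, f.type_desc, f.physical_name,
       f.size / 128 AS FileSizeMB,
       vs.volume_mount_point,
       vs.available_bytes / 1048576 AS VolumeFreeMB
FROM sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID(), f.file_id) AS vs;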

Using the above information, I was able to provide the root cause of the issue. Have you been in a situation where you had to look at multiple logs to find the root cause?

Reference: Pinal Dave (https://blog.SQLAuthority.com)

First appeared on SQL SERVER – Rebuild Index Job Failed – Error: 9002 – The Transaction Log for Database ‘PinalDB’ is Full Due to ‘LOG_BACKUP’

SQL SERVER – Monitor Estimated Completion Times for Backup, Restore and DBCC Commands


Earlier this week, we were fortunate to receive an amazing script to overview a HADR / AlwaysOn local replica server from SQL Server expert Dominic Wirth. You can read that script here: Scripts to Overview HADR / AlwaysOn Local Replica Server. As a follow-up to the previous conversation, today I received another extremely helpful script from Dominic Wirth which I will be using with my customers every single day. In this blog post, we will see the script which monitors estimated completion times for Backup, Restore and DBCC commands.

In the real world, we all love progress bars. As a matter of fact, progress bars help a lot of us psychologically; they play a very important role in reducing the anxiety of users. While working with lots of databases and long-running operations, we often have no idea how long any of the operations will last, and DBAs and devs often spend hours staring at the blank screen of SSMS.


Dominic has created a wonderful script which monitors the progress and estimated completion times for the following three important operations in SQL Server:

  • Database Backup
  • Database Restore
  • DBCC Commands

Let us see the script which monitors estimated completion times for Backup, Restore and DBCC commands.

/*==================================================================
Script: Monitor Backup Restore Dbcc.sql
Description: This script will display estimated completion times
and ETAs of Backup, Restore and DBCC operations.
Date created: 13.09.2018 (Dominic Wirth)
Last change: -
Script Version: 1.0
SQL Version: SQL Server 2008 or higher
====================================================================*/
SELECT Req.percent_complete AS PercentComplete
,CONVERT(NUMERIC(6,2),Req.estimated_completion_time/1000.0/60.0) AS MinutesUntilFinish
,DB_NAME(Req.database_id) AS DbName,
Req.session_id AS SPID, Txt.text AS Query,
Req.command AS SubQuery,
Req.start_time AS StartTime
,(CASE WHEN Req.estimated_completion_time < 1
THEN NULL
ELSE DATEADD(SECOND, Req.estimated_completion_time / 1000, GETDATE())
END) AS EstimatedFinishDate
,Req.[status] AS QueryState, Req.wait_type AS BlockingType,
Req.blocking_session_id AS BlockingSPID
FROM sys.dm_exec_requests AS Req
CROSS APPLY sys.dm_exec_sql_text(Req.[sql_handle]) AS Txt
WHERE Req.command IN ('BACKUP DATABASE','RESTORE DATABASE') OR Req.command LIKE 'DBCC%';

[Screenshot: progressdb]

Dominic’s original script had an interesting use of IIF as well; however, to keep this script simple, I have removed that line and kept the alternative he had provided using a CASE expression. I am totally impressed by how thorough his work is.

Once again thanks Dominic for such selfless efforts to help the community with your amazing scripts.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Monitor Estimated Completion Times for Backup, Restore and DBCC Commands

Where is the Default Settings for Backup Compression? – Interview Question of the Week #191


Question: Where is the Default Settings for Backup Compression?

Answer: Before I continue with this blog post, I must show you the image which prompted this question.

[Screenshot: defaultcompression]

Please look at the following screen. You can get to this screen by right-clicking on any particular database, going to Tasks, and selecting Back Up.

[Screenshot: defaultcompression1]

On this screen, you will see three options for backup compression. Quite a lot of people see this setting and wonder where the default server setting actually is.

Well, the server setting is visible by right-clicking on the server instance name, going to Properties, and selecting Database Settings.

On the Database Settings page, there is a Backup and restore section where you will see a small checkbox named Compress backup. If you select this checkbox, it will change the default backup compression setting.

[Screenshot: defaultcompression2]

You can also check the default backup compression setting by running the following T-SQL:

SELECT
CASE [value]
WHEN 1 THEN 'Backup Compression On'
ELSE 'Backup Compression Off'
END AS [Backup Compression Default]
FROM sys.configurations
WHERE name = 'backup compression default'
GO

Additionally, if you want to enable or disable backup compression using T-SQL, you can run the following scripts.

To turn backup compression on by default, run the following command:

EXEC sys.sp_configure N'backup compression default', N'1'
GO
RECONFIGURE WITH OVERRIDE
GO

To turn backup compression off by default, run the following command:

EXEC sys.sp_configure N'backup compression default', N'0'
GO
RECONFIGURE WITH OVERRIDE
GO

More than an interview question, I think this is interesting information which every DBA should know about their server.

Reference: Pinal Dave (https://blog.SQLAuthority.com)

First appeared on Where is the Default Settings for Backup Compression? – Interview Question of the Week #191


SQL SERVER – Restoring SQL Server 2017 to SQL Server 2005 Using Generate Scripts


Let me answer the first question first: there is no way to back up a SQL Server 2017 database and restore it to SQL Server 2005 with the help of any tool. In this blog post, we will learn how we can generate scripts to achieve this task instead.

However, there is a small workaround if you really want to move a database from SQL Server 2017 to SQL Server 2005. Recently, during a Comprehensive Database Performance Health Check, a customer asked me if there is any way they can restore one of their databases from SQL Server 2017 to SQL Server 2005. Let us see how I was able to help my customer.


The only way we can move a database from SQL Server 2017 to SQL Server 2005 is to generate a script for the schema as well as the data using SSMS. Let us see, with the help of images, how we can do that.

First, in SSMS, right-click on the database, go to Tasks >> Generate Scripts…

[Screenshot: genscript1]

Continue forward with the Generate and Publish Scripts wizard.

[Screenshot: genscript2]

Select the option to either script specific database objects or the entire database.

[Screenshot: genscript3]

This is one of the most critical steps. Here you specify which version of SQL Server you are generating your scripts for.

[Screenshot: genscript4]

Next, select the option of Schema and Data.

[Screenshot: genscript5]

Now, once again, you need to select the necessary options here. I usually enable all the options, as the default selection does not script every single object in the database.

[Screenshot: genscript6]

Once you hit OK, it will bring you to the summary screen. Here, click Next. Please note the location where it is going to generate the scripts.

[Screenshot: genscript7]

The next screen will start generating the schema and data script.

[Screenshot: genscript8]

That’s it. We are done!

[Screenshot: genscript9]

Once you generate the script, go to the location where you saved it and run it in SSMS against the SQL Server 2005 instance. You will be able to successfully recreate your database from SQL Server 2017 on SQL Server 2005.

If you are familiar with any other method, do let me know and I will be happy to publish that on this blog with due credit.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Restoring SQL Server 2017 to SQL Server 2005 Using Generate Scripts

SQL SERVER – Error: 45207 – SQL Server Managed Backup to Microsoft Azure Cannot Configure the Database Because a Container URL was Either not Provided or Invalid


While using SQL Server on a Virtual Machine in Azure, I ran into an interesting error. In this blog, we will learn how to fix Msg 45207 – SQL Server Managed Backup to Microsoft Azure cannot configure the database ‘sqlauthdb’ because a container URL was either not provided or invalid. It is also possible that your SAS credential is invalid.

When I deployed the SQL Server Azure Virtual Machine, I enabled a feature called “Automatic Backup”. Due to this setting, SQL Server was taking regular backups to blob storage. Since I am not running a production server, I decided to minimize the cost by deleting unwanted resources, so I deleted the storage account. I then noticed that SQL Server started logging errors in the SQL ERRORLOG about backup failures, so I decided to disable this feature.

When I did that from the Azure portal, the disabling operation failed with the below error:

Error type

At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details. (Code: DeploymentFailed)

Error details

The resource operation completed with terminal provisioning state ‘Failed’. (Code: ResourceDeploymentFailure)

VM has reported a failure when processing extension ‘SqlIaasExtension’. Error message: “SQL Server IaaS Agent: SQL Server Managed Backup to Microsoft Azure cannot configure the database ‘sqlauthdb’ because a container URL was either not provided or invalid. It is also possible that your SAS credential is invalid. ;The creator of this fault did not specify a Reason.;Automated Patching: Automated Patching enabled: False, Windows Update state: ScheduledInstallation, VM is up to date in applying important updates.;Automatic Telemetry: Performance Collector State: DisabledOptedOut”. (Code: VMExtensionProvisioningError)

Here is the screenshot for the same error.

[Screenshot: disable-auto-bkp-01]

I then decided to disable it from SQL Server using the managed backup related stored procedure. I executed the below code:

EXEC managed_backup.sp_backup_config_basic  
	@enable_backup=0 
	,@database_name = 'sqlauthdb'

Well, it failed with the exact same error which we got from the portal.

Msg 45207, Level 17, State 11, Procedure managed_backup.sp_add_task_command, Line 102 [Batch Start Line 8]

SQL Server Managed Backup to Microsoft Azure cannot configure the database ‘sqlauthdb’ because a container URL was either not provided or invalid. It is also possible that your SAS credential is invalid.

There is nothing wrong with the error message. I had deleted the storage account, so the URL was definitely invalid. What should I do now?

I had two choices:

  1. Create the same storage account again and put a new SAS token in the SQL credential so that SQL Server can connect to the storage and disable the feature.
  2. Find a way to clean up all settings related to managed backup in SQL Server.

I am a lazy guy, and I wanted to get things done with choice #2.

WORKAROUND/SOLUTION

While looking at msdb stored procedures, I came across an interesting procedure (thanks to the IntelliSense feature of SSMS): autoadmin_metadata_delete.

[Screenshot: disable-auto-bkp-02]

When I looked at the code of the stored procedure, it contains the below comment:

— Procedure to delete entries in metadata tables

Perfect! This is what I was looking for. Here is the code which I ran, and it magically cleaned up everything.
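A sketch of the call, assuming the procedure lives in msdb under the default schema, looks like this:

USE msdb;
GO
-- Removes the managed backup / autoadmin metadata entries for all databases
EXEC autoadmin_metadata_delete;
GO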

NOTE: I must mention that you should use this with caution in production, as it deletes everything about managed backup for all databases. Also, it is not documented on MSDN, so Microsoft might remove it from the product later.

Reference: Pinal Dave (https://blog.SQLAuthority.com)

First appeared on SQL SERVER – Error: 45207 – SQL Server Managed Backup to Microsoft Azure Cannot Configure the Database Because a Container URL was Either not Provided or Invalid

SQL SERVER – SQL Server Management Studio Crash While Using Backup to URL or Connecting to Storage


Recently, as a part of my on-demand consulting, I was helping a client who was in a disaster situation, and there was an urgent need to restore a database backup which had been taken to Azure Blob Storage. In this blog, we will learn how to fix the crash of SQL Server Management Studio while using Backup to URL or connecting to storage.

As explained earlier, the client was trying to restore the database backup which was stored in Azure Blob Storage. They were going to the “Restore Database” menu option in SSMS and choosing the device as URL. As soon as they clicked on the Add button, SQL Server Management Studio crashed. Here is a screenshot.

[Screenshot: ssms-crash-url-01]

Here are the details which you could see by clicking on the “View problem details” button.

Problem signature:
Problem Event Name: CLR20r3
Problem Signature 01: Ssms.exe
Problem Signature 02: 2014.120.5571.0
Problem Signature 03: 5a56a398
Problem Signature 04: Microsoft.SqlServer.RegSvrEnum
Problem Signature 05: 12.0.5000.0
Problem Signature 06: 5764ad48
Problem Signature 07: 7
Problem Signature 08: da
Problem Signature 09: System.Exception
OS Version: 6.3.9600.2.0.0.400.8
Locale ID: 1033
Additional Information 1: e01e
Additional Information 2: e01e71249cad1577f3cd863e8d1ab175
Additional Information 3: 1edb
Additional Information 4: 1edbb4ca04145d7b8df23b25a086703c

The above information could not help in finding the cause, but I have shared it here so that someone can search for it and reach this blog.

Then I tested the same steps on my own SQL Server Management Studio, and it asked me to connect to a storage account from where the backup could be picked for restore purposes. So, I asked my client to connect to the storage directly using SQL Server Management Studio via the below option.

[Screenshot: ssms-crash-url-02]

Strangely, we could see the same crash of SQL Server Management Studio there as well. This test confirmed that there was some issue with the information stored by SQL Server Management Studio about the storage account. My client had been using the same SSMS, and in the past he was able to connect to Azure storage.

WORKAROUND/SOLUTION

I asked my client to use a different machine and try to connect to the Azure storage from that machine. Interestingly, it worked, and that confirmed our theory.

Now we needed to figure out how to clean up the information about the storage account stored in the SSMS cache. I was not able to find a way to clean up only the Azure storage-related information from the user profile, so I ended up removing the complete saved settings of SSMS by deleting the sqlstudio.bin file from the %AppData% user profile. The “sqlstudio.bin” file is located under the below location. (Go to Start > Run and paste the below path.)

%AppData%\Microsoft\SQL Server Management Studio

Once the folder is opened on your server, you might see a folder like below.

[Screenshot: ssms-crash-url-03]

Based on the SSMS version, go inside the corresponding folder and delete the “sqlstudio.bin” file. (You can also rename the file, and SSMS will create a new one.)

Folder number – SQL Server Management Studio (SSMS) version
11.0 – SQL Server 2012
12.0 – SQL Server 2014
13.0 – SQL Server 2016
14.0 – SQL Server 2017
18.0 – SSMS 18.0 (versioned separately from SQL Server)

Please keep in mind that this is not the smartest solution available, as it deletes all saved information in SSMS (like usernames and passwords, the server name list, any settings which you have changed, etc.).

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – SQL Server Management Studio Crash While Using Backup to URL or Connecting to Storage

SQL SERVER – Finding Compression Ratio of Backup


During a recent Comprehensive Database Performance Health Check, I was asked if there is any way to know the compression ratio of a backup when we enable the backup compression setting in SQL Server.

As you know, a compressed backup is smaller than an uncompressed backup of the same data; compressing a backup typically requires less device I/O and therefore usually increases backup speed significantly.

Let us see a quick script which shows how much compression was achieved for backups taken with backup compression enabled in SQL Server:

SELECT database_name, backup_size, compressed_backup_size,
backup_size/compressed_backup_size AS CompressedRatio
FROM msdb..backupset; 

When you run the above script, it will return results like the following:

[Screenshot: compression]

If you look at the output, you can see the name of the database, the backup size, as well as the compressed backup size. Based on this, you can figure out the compression ratio.
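If you want to avoid a divide-by-zero on backups that were not compressed, a slightly more defensive variant (a sketch over the same msdb table) is:

SELECT database_name,
       backup_size / 1048576.0 AS BackupSizeMB,
       compressed_backup_size / 1048576.0 AS CompressedSizeMB,
       CASE WHEN compressed_backup_size > 0
            THEN backup_size / compressed_backup_size
       END AS CompressionRatio
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;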

Let me know if you have any other script which can help us figure out the compression ratio of a backup. I will post it on the blog with due credit to you.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Finding Compression Ratio of Backup

SQL SERVER – What is the Meaning of PREEMPTIVE_HTTP_EVENT_WAIT? How to Fix it?


While playing with my SQL Server in an Azure VM, I faced an interesting issue. I made a networking change to block internet access by creating an outbound rule, and then I started facing this issue. In this blog, I will explain one of the possible causes of the PREEMPTIVE_HTTP_EVENT_WAIT wait type.

THE SITUATION

I have SQL Server in an Azure Virtual Machine. For learning purposes, I wanted to block internet access on the virtual machine. So, I created an outbound rule to block internet access with the “Deny” option, as shown below.

[Screenshot: prem-01]

The above rule worked fine, and I was able to achieve what I wanted. I was unable to open any site, including google.com.

[Screenshot: prem-02]

THE PROBLEM

Since I blocked outbound internet access, my backups to URL started to give me trouble. Whenever I started a backup, it ran for a long time (it used to finish in a few seconds). I executed a query against the DMVs to find out what was happening in SQL Server and found the waits below.
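A sketch of the kind of query that surfaces these waits, assuming sys.dm_exec_requests, is:

SELECT session_id, command, status, wait_type, wait_time, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%' OR wait_type IS NOT NULL;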

The two wait types were:

  • BACKUPTHREAD
  • PREEMPTIVE_HTTP_EVENT_WAIT

[Screenshot: prem-03]

I could not find any documentation for PREEMPTIVE_HTTP_EVENT_WAIT, but being a preemptive wait, it does not end on its own the way regular SQL scheduler waits do. It also seems like it is waiting for some HTTP request which must have gone out to storage due to my backup to an https URL. Here is the backup command:

BACKUP DATABASE SQLAuthDB TO  
URL = N'https://sqlauthority.blob.core.windows.net/backupcontainer/SQLAuthDB.bak'
GO

As you can see, I am taking a backup of the database using the Backup to URL feature, and it should go to my storage account. Since the backup traffic has to leave the machine, it uses the internet, which I had blocked on this virtual machine. I waited for around 20 minutes and finally the backup failed with the below message:

Msg 3201, Level 16, State 1, Line 1
Cannot open backup device ‘https://sqlauthority.blob.core.windows.net/backupcontainer/SQLAuthDB.bak’. Operating system error 50(The request is not supported.).
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

Now I knew that I had broken Backup to URL by blocking the internet. Does it mean I must have internet access on the virtual machine? Or is there something else I can do?

THE SOLUTION

After searching the internet, I found two ways to solve this while keeping the existing rule:

  1. Open the specific port and give that rule a lower priority number than the internet rule. In our case, we are using https, which uses port 443.
  2. Add an outbound rule for Storage and allow those connections to go out even though the internet is blocked.

Option 2 is better, as it takes care of the port number automatically. So, I changed my internet rule priority to 200 and added a new rule with the destination set to “Storage”.

[Screenshot: prem-04]

…after this, google.com was still not opening, but backup to URL started working!

[Screenshot: prem-05]

So now, whenever you see PREEMPTIVE_HTTP_EVENT_WAIT, start checking what kind of request is going out to the internet. If it is a backup to URL, then you know the answer now.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – What is the Meaning of PREEMPTIVE_HTTP_EVENT_WAIT? How to Fix it?

SQL SERVER – Small Backup for Large Database


Recently, I had a very interesting experience with one of my customers while working with them on a Comprehensive Database Performance Health Check. The issue was related to a small backup for a large database, and it was so interesting that I decided to share it with all of you.

Real World Customer Story

While working together on their server health check, the client told me that they believed there was some issue going on with one of their databases. The size of the database was over 900 GB, but the backup of the entire database was less than 300 MB.

The customer was really worried about the small size of the backup and was suspicious that the backup did not contain all the data. They asked me to look into this, and I had a very interesting finding for this scenario.

Empty Big Log File

After carefully looking into their database, I realized that they had a big log file which was pretty much empty. Because of that, they were under the impression that their database was very big, but when the backup happened it did not contain the empty part of the log file, and that was the reason for the smaller backup file.

Recreate the Scenario

Let us re-create the scenario.

CREATE DATABASE [BigLog]
ON PRIMARY
( NAME = N'BigLog', FILENAME = N'D:\BigLog.mdf' , SIZE = 8192KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'BigLog_log', FILENAME = N'D:\BigLog_log.ldf' , SIZE = 8192000KB , MAXSIZE = 2048GB , FILEGROWTH = 1024KB )
GO

If you go and check the size of the files in Explorer, you will notice that the log file is 8 GB big. However, as we have just created the database and have not populated it, the entire database is technically empty.

Now let us take a backup of the entire database using the following script:

BACKUP DATABASE [BigLog] TO DISK = N'D:\BigLog.bak' WITH STATS = 10
GO

Now let us go and check the size of the backup file.

[Screenshot: smallbackup]

You will notice that even though the overall size of the database is very big, when SQL Server takes a backup, it skips the empty space and backs up the data only. This is indeed a very good feature; otherwise, the size of the backup would be unnecessarily big.
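If you want to see how much of the log file is actually in use before the backup, a quick check (a sketch; run it in the context of the BigLog database) is:

-- Percentage of the log file that actually contains log records
SELECT total_log_size_in_bytes / 1048576 AS LogSizeMB,
       used_log_space_in_bytes / 1048576 AS UsedLogMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;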

The Best Practices

As per the best practices, it is always a good idea to shrink your log file before taking the full backup, as it removes the unnecessary empty space from the log file. This will be very helpful when you try to restore the database: when you restore your database from the tiny backup, the data and log files are recreated at the size they had when the database was backed up.

This means that if you do not shrink your log file before you take the full backup, then when you restore the database, it will create the log file with all the empty space and essentially waste your important drive space.

Your Turn

Do let me know if you found this story interesting or not. I have many such stories which I will be happy to share based on your feedback. Additionally, if you have any such interesting stories from your customers or workplace, do share them with me in the comments and I will publish them with due credit to you.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Small Backup for Large Database

How to Shrink All the Log Files for SQL Server? – Interview Question of the Week #203


Question: How to Shrink All the Log Files for SQL Server?

Answer: This question was asked recently after someone read my latest blog post here: SQL SERVER – Small Backup for Large Database. Before you continue reading this blog post, let me stress a couple of details. First of all, I am in no way encouraging you to shrink your database, and particularly not the data files. Shrinking the database can be very bad for your SQL Server’s performance, and it can be extremely costly to you eventually.


The primary reason one should consider shrinking the log file is right before the backup, so that whenever we restore the database, SQL Server does not have to recreate the large log file, which takes up additional (unnecessary) space as well as increases the time to restore the database.

Now that we have clarified why one should shrink only the log file, let us see the script to shrink all the log files on the server.

DECLARE @ScriptToExecute VARCHAR(MAX);
SET @ScriptToExecute = '';
SELECT
@ScriptToExecute = @ScriptToExecute +
'USE ['+ d.name +']; CHECKPOINT; DBCC SHRINKFILE ('+f.name+');'
FROM sys.master_files f
INNER JOIN sys.databases d ON d.database_id = f.database_id
WHERE f.type = 1 AND d.database_id > 4
-- AND d.name = 'NameofDB'
SELECT @ScriptToExecute ScriptToExecute
EXEC (@ScriptToExecute) 

If you want to shrink the log file of only one particular database, you can uncomment the line which says NameofDB and use the script for just that database. Let me know in a comment if you use any such script, and I will be happy to post it on the blog with due credit to you.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on How to Shrink All the Log Files for SQL Server? – Interview Question of the Week #203


SQL SERVER – Cluster Install Failure – Code 0x84cf0003 – Updating Permission Setting for Folder Failed


There are various issues which I have seen during SQL Server installation, and most of the time they are intuitive; the error message is mostly helpful and provides the right direction. In this blog, we will discuss the error “Updating permission setting for folder failed”:

Here is the exact error which we could see in setup logs under the BootStrap folder.

Updating permission setting for folder ‘C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA’ failed. The folder permission setting were supposed to be set to ‘D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266783-3050454056-335720097-2195381415)’.

Permission error occurs when you use a volume mount point in SQL Server Setup

My client was not installing it on the “root” of the mount point. The complete message from Detail.txt is shown below. (I have added line numbers and removed the date/time for better visibility.)

  1. SQLEngine: : Checking Engine checkpoint ‘SetSecurityDataDir’
  2. SQLEngine: –SqlEngineSetupPrivate: Setting Security Descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926) on Directory C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA
  3. Slp: Sco: Attempting to set security descriptor for directory C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA, security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  4. Slp: Sco: Attempting to normalize security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  5. Slp: Sco: Attempting to replace account with sid in security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  6. Slp: ReplaceAccountWithSidInSddl — SDDL to be processed: D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  7. Slp: ReplaceAccountWithSidInSddl — SDDL to be returned: D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  8. Slp: Prompting user if they want to retry this action due to the following failure:
  9. Slp: The following is an exception stack listing the exceptions in outermost to innermost order
  10. Slp: Inner exceptions are being indented
  11. Slp:
  12. Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.SqlDirectoryException
  13. Slp: Message:
  14. Slp: Updating permission setting for folder ‘C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA’ failed. The folder permission setting were supposed to be set to ‘D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)’.
  15. Slp: HResult : 0x84cf0003
  16. Slp: FacilityCode : 1231 (4cf)
  17. Slp: ErrorCode : 3 (0003)

WORKAROUND/SOLUTION

We checked and made sure that the service account had the below permissions in the security policy:

  • Act as Part of the Operating System
  • Bypass Traverse Checking
  • Lock Pages In Memory
  • Log on as a Batch Job
  • Log on as a Service
  • Replace a Process Level Token
  • Backup files and directories
  • Debug Programs
  • Manage auditing and security log

I gave all possible permissions to the various accounts on the folders, including “Full Control” to “Everyone”.

At last, we found that this was due to the “Audit Object Access” policy, which was enabled from the domain controller via GPO. Once we disabled it, the installation went fine.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Cluster Install Failure – Code 0x84cf0003 – Updating Permission Setting for Folder Failed

SQL SERVER – New Parallel Operation Cannot be Started Due to Too Many Parallel Operations Executing at this Time


One of my clients contacted me via my On Demand (50 Minutes) offering; they were seeing the below error message in the ERRORLOG – New parallel operation cannot be started due to too many parallel operations executing at this time.


Here is the complete message which was seen in the SQL Server ERRORLOG file.

New parallel operation cannot be started due to too many parallel operations executing at this time. Use the “max worker threads” configuration option to increase the number of allowable threads, or reduce the number of parallel operations running on the system.

I asked for the complete ERRORLOG and Event Logs to look at it from all possible angles.

  • Error: 18210, Severity: 16, State: 1.
  • BackupIoRequest::ReportIoError: write failure on backup device ‘{BD27F651-5DB3-4CB4-9615-9FDEC4D8EECE}331’. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).

At the same time, I found the below in the Event Log:

  • SQLVDI: Loc=TriggerAbort. Desc=invoked. ErrorCode=(0). Process=1724. Thread=2300. Server. Instance=MSSQLSERVER. VD=Global\{FFC0C1C0-25D9-4A90-82B3-3ABBCFEA9476}247_SQLVDIMemoryName_0.
  • BACKUP failed to complete the command BACKUP DATABASE ReportingService_Alerting. Check the backup application log for detailed messages.
  • SQLVDI: Loc=TriggerAbort. Desc=invoked. ErrorCode=(0). Process=1724. Thread=9844. Server. Instance=MSSQLSERVER. VD=Global\{FFC0C1C0-25D9-4A90-82B3-3ABBCFEA9476}45_SQLVDIMemoryName_0.
  • SQLVDI: Loc=TriggerAbort. Desc=invoked. ErrorCode=(0). Process=1724. Thread=8176. Server. Instance=MSSQLSERVER. VD=Global\{FFC0C1C0-25D9-4A90-82B3-3ABBCFEA9476}264_SQLVDIMemoryName_0.

From the above, it was clear that they were not taking native backups of the SQL Server databases using a maintenance plan or T-SQL. The backups were being taken using SQLVDI via 3rd party software.

WORKAROUND/SOLUTION

Looking at the messages, it sounded like the error appears at the same time a backup is running. Further investigation showed that they had more than 1000 databases, and they were all getting backed up via the 3rd party tool at the same time. I suggested that they:

  1. Talk to the backup team and find a way to stagger the backups, so there are not too many parallel backup threads at the same time.
  2. Increase the number of worker threads to avoid the message in the ERRORLOG (see the sketch below).
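For the second option, a sketch of checking and raising the setting is below ('max worker threads' is an advanced option; the value 1024 is only an example, and 0 lets SQL Server size it automatically):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';        -- current configured and running values
EXEC sp_configure 'max worker threads', 1024;  -- example value only
RECONFIGURE;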

Once they reduced the number of concurrent backups, the error stopped.

Have you seen such messages before? What did you do to solve them? Please share via the comments with others in the SQL community.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – New Parallel Operation Cannot be Started Due to Too Many Parallel Operations Executing at this Time

SQL SERVER – FIX: 3637 – A Parallel Operation Cannot be Started From a DAC Connection


This was indeed an interesting error which I had never seen earlier, and it is about the DAC connection.

Here is the complete error message which my client reported.

Msg 3637, Level 16, State 3, Line 1
A parallel operation cannot be started from a DAC connection.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

Looking at the error message, I asked him how he was connecting to SQL Server and where exactly the error was appearing.

They informed me that they had scheduled a backup job from Windows. Since it is a SQL Server Express edition, they created a batch file to take a backup of the database. The batch file was running as a scheduled task via Windows Task Scheduler. The error appeared when a backup was initiated.

I asked them to show me the batch file which takes the backup, and it didn’t take much time to find what was going wrong.

I was able to reproduce the error in my local lab environment as well.

[Screenshot: dac-bkp-err-01]

WORKAROUND/SOLUTION

To reproduce the error, I connected to SQL Server via SQL Server Management Studio and gave the server name as “Admin:ServerName”. This caused the connection to go in as a Dedicated Admin Connection, or DAC. This is not a normal connection to SQL Server via port 1433. There are some limitations of a DAC connection, and being unable to take a backup is one of them.

In my client’s situation, the .bat file had the below:

SQLCMD -S <ServerName> -E -A -i "<path to backup script>"

When I looked at the documentation, it says [-A dedicated admin connection], and that explained everything. Here are my test results with -A and without -A, and you can see the difference.

[Screenshot: dac-bkp-err-02]

In short, I asked the customer to change the script to remove the “-A” parameter to avoid the DAC connection, and since then the backups have been running like a charm.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: 3637 – A Parallel Operation Cannot be Started From a DAC Connection

SQL SERVER – Stretch Database – ERROR: The Provided Location is Not Available for Resource Group


This was my first experiment with the feature called Stretch Database. I started the wizard and encountered an error at the very end. Here is the error message.

[Screenshot: StrechDB_Reg_Err-01]

And here is the text of the error message:

Operation to create resource group stretchgroup-australiacentral failed. Details : {“error”:{“code”:”LocationNotAvailableForResourceGroup”,”message”:”The provided location ‘australiacentral’ is not available for resource group. List of available regions is ‘centralus,eastasia,southeastasia,eastus,eastus2,westus,westus2,northcentralus,southcentralus,westcentralus,northeurope,westeurope,japaneast,japanwest,brazilsouth,australiasoutheast,australiaeast,westindia,southindia,centralindia,canadacentral,canadaeast,uksouth,ukwest,koreacentral,koreasouth,francecentral,southafricanorth’.”}}

It also gave an option to read the logs, and I found the same error there as well:

  1. [Informational] TaskUpdates: Message:Task : ‘Provision Azure Sql Server stretchserver-stretchdbdemo-20190317-031938571’ — Status : ‘Started’ — Details : ‘Task ‘Provision Azure Sql Server stretchserver-stretchdbdemo-20190317-031938571′ started ….’.
  2. [Informational] TaskUpdates: Message:Task : ‘Provision Azure Sql Server stretchserver-stretchdbdemo-20190317-031938571’ — Status : ‘Running’ — Details : ‘Task failed due to following error: Microsoft.SqlServer.Management.StretchDatabase.Model.Tasks.CreateResourceGroupFailedException: Operation to create resource group stretchgroup-australiacentral failed. Details : {“error”:{“code”:”LocationNotAvailableForResourceGroup”,”message”:”The provided location ‘australiacentral’ is not available for resource group. List of available regions is ‘centralus,eastasia,southeastasia,eastus,eastus2,westus,westus2,northcentralus,southcentralus,westcentralus,northeurope,westeurope,japaneast,japanwest,brazilsouth,australiasoutheast,australiaeast,westindia,southindia,centralindia,canadacentral,canadaeast,uksouth,ukwest,koreacentral,koreasouth,francecentral,southafricanorth’.”}}

at Microsoft.SqlServer.Management.StretchDatabase.Model.Tasks.ProvisionSqlAzureServerTask.CreateNewResourceGroup(ResourceManagement resourceManagementChannel, ServiceOperationStatus& status)
at Microsoft.SqlServer.Management.StretchDatabase.Model.Tasks.ProvisionSqlAzureServerTask.Perform(IExecutionPolicy taskExecutionPolicy)
   at Microsoft.SqlServer.Management.StretchDatabase.Model.Common.Task.Perform(IExecutionPolicy policy, CancellationToken token, ScenarioTaskHandler taskDelegate), retrying …’.

WORKAROUND/SOLUTION

I must say the error is intuitive and tells us that the location is not available. But I was wondering: where did I choose the location? So, I launched the wizard again and found the place (highlighted below).

[Screenshot: StrechDB_Reg_Err-02]

The problem here is that the location came up as the default (alphabetically first), and I didn’t pay attention to it.

As soon as I selected “South India”, I was able to proceed and stretch the table to Azure.

[Screenshot: StrechDB_Reg_Err-03]

Have you tested this feature? Do you have any interesting learnings?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Stretch Database – ERROR: The Provided Location is Not Available for Resource Group

Get-AzureStorageBlob: The Remote Server Returned an Error: (403) Forbidden. HTTP Status Code: 403 – HTTP Error Message: This Request is Not Authorized to Perform This Operation


I was working with one of my clients, and they wanted to clean up old backup blobs from Azure. I have already blogged about this and shared my script: SQL SERVER – PowerShell Script – Remove Old SQL Database Backup Files From Azure Storage. In this blog, we will learn how to fix the error: (403) Forbidden – This request is not authorized to perform this operation.

This was their on-premises SQL Server, where the backups were taken directly to Azure Blob Storage rather than to local disk. I modified the script as per their environment and used an Automation Account and a runbook to schedule it. They informed me that it was not working as expected: no files older than 7 days were getting deleted!

I looked at the output of the runbook and was able to reproduce the error by running the script in PowerShell ISE.

[Screenshot: clean-blob-01]

Here is the text of the message:

Get-AzureStorageBlob : The remote server returned an error: (403) Forbidden. HTTP Status Code: 403 – HTTP Error Message: This request is
not authorized to perform this operation.
At line:1 char:9
+ $blobs= Get-AzureStorageBlob -Container $ContainerName -Context $cont …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Get-AzureStorageBlob], StorageException
+ FullyQualifiedErrorId : StorageException,Microsoft.WindowsAzure.Commands.Storage.Blob.Cmdlet.GetAzureStorageBlobCommand

I used Azure Storage Explorer to connect to the storage via the Access Key, and that also failed.

[Screenshot: clean-blob-02]

Interestingly, when I used Storage Explorer on the SQL Server machine, it was able to connect.

WORKAROUND/SOLUTION

While there might be other reasons for the same error, my situation was unique. So, check this first, and if it doesn’t solve your issue, keep looking for other causes.

Since we were able to connect from the SQL Server machine but not from others, I knew the access must have been blocked somewhere. I remembered such a setting in the storage account, so I checked “Firewall and Virtual Networks” and found that they had allowed only the IP of the SQL Server machine.

[Screenshot: clean-blob-03]

Once I modified the setting as per their environment, the runbook job executed fine and the old backups were deleted.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on Get-AzureStorageBlob: The Remote Server Returned an Error: (403) Forbidden. HTTP Status Code: 403 – HTTP Error Message: This Request is Not Authorized to Perform This Operation
