I ran a query and it took about 5s to return. Then I ran the same query again and it returned in < 1s. I ran the query again with a different value in the WHERE clause and it still returned in < 1s, so I'm guessing that SQL Server cached the query plan. Is this the proper terminology to describe what SQL Server did? I was thinking that only sprocs cached the query plan, but I guess an SSMS session will do this as well. Or does this have to do with re-running the query within the same connection that was used to run it the first time?

I'm assuming this also means that when the sproc that contains this SQL runs, the first execution will take 5s and subsequent calls will take < 1s. Does this seem like a correct assumption? In this case, do you normally disregard the initial exec time of the SQL and rely on the exec time after the query plan is cached?

I seem to remember that there's some type of system sproc that can be run to precompile the sproc and cache the query plan. Can you provide the name of that sys sproc? In the scenario I described, should I expect a < 1s return time on initial exec if I use the SQL as written but run that sys sproc before exposing the sproc to prod?
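For what it's worth, a minimal sketch of how the cached plan (and cached data pages) can be checked and cleared when timing runs like this - test servers only, and the LIKE pattern is just a placeholder:

[code="sql"]
-- Is a plan for this statement already in the plan cache, and how often was it reused?
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%MyQueryText%';   -- placeholder: part of the query text

-- Force a "cold" run again (test servers only):
DBCC FREEPROCCACHE;       -- drop all cached plans
CHECKPOINT;               -- flush dirty pages so the next command can drop them
DBCC DROPCLEANBUFFERS;    -- drop cached data pages from the buffer pool
[/code]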
↧
query plan cache analysis question
↧
tradeoffs in a sproc query design scenario?
I have a sproc that needs to return user data as well as perms data for that user. Returning only the perms requires a 4-table join, and a user may have 1500+ perms. Returning only the user info requires a 2-table join.

I could write this sproc as a single query which joins everything together. However, this would return 5 columns of redundant user info 1500 times. Alternatively, I could write this sproc as 2 separate queries, where Q1 simply returns a single row of 5 columns of user data and Q2 returns 1500 rows of permission data.

Hardcore SQL people I've worked with in the past tend to want to put everything into 1 big query, but considering the additional size of the redundant data this would return over the network, I'm thinking the 2 separate queries I described would be more ideal.

If user perms were limited to 10-20 I might consider putting everything in a single query, but since the Q2 rowset is large I'm pretty sure I'm going to use 2 queries. Is this the approach you would lean towards as well?
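A minimal sketch of the 2-query version, with hypothetical table and column names, just to make the shape concrete:

[code="sql"]
CREATE PROCEDURE dbo.GetUserWithPerms
    @UserId int
AS
BEGIN
    SET NOCOUNT ON;

    -- Q1: one row, 5 columns of user data (the 2-table join collapsed here for brevity)
    SELECT u.UserId, u.UserName, u.Email, u.CreatedDate, u.IsActive
    FROM dbo.Users AS u
    WHERE u.UserId = @UserId;

    -- Q2: one row per permission (the 4-table join)
    SELECT p.PermId, p.PermName
    FROM dbo.UserRoles AS ur
    JOIN dbo.Roles     AS r  ON r.RoleId  = ur.RoleId
    JOIN dbo.RolePerms AS rp ON rp.RoleId = r.RoleId
    JOIN dbo.Perms     AS p  ON p.PermId  = rp.PermId
    WHERE ur.UserId = @UserId;
END
[/code]

The caller reads the first result set for the user row and the second for the perms, so the 5 user columns cross the network once instead of 1500 times.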
↧
↧
EXEC INSERT INTO TABLE with PARAMETER
Can anyone help me? Is it possible to do this?

[code="sql"]
DECLARE
    @ProjectCode varchar(25),
    @TypeCode    varchar(25),
    @BomDate     varchar(25)

SET @ProjectCode = 'PRO000001'
SET @TypeCode    = 'PS0000001'

EXEC ('INSERT INTO [dbo].[zPS-XSCN231L] ( [level], [Pin], [Description] )
       VALUES ( @ProjectCode, @TypeCode, @BomDate ) ')
[/code]
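Variables declared outside an EXEC('...') string are not visible inside it, since the string runs as its own batch. A minimal sketch of a parameterized alternative is below, assuming the same table and columns from the post and that the values shown are only examples:

[code="sql"]
DECLARE @ProjectCode varchar(25) = 'PRO000001',
        @TypeCode    varchar(25) = 'PS0000001',
        @BomDate     varchar(25) = '8/19/2016';   -- example value only

EXEC sys.sp_executesql
     N'INSERT INTO [dbo].[zPS-XSCN231L] ([level], [Pin], [Description])
       VALUES (@ProjectCode, @TypeCode, @BomDate);',
     N'@ProjectCode varchar(25), @TypeCode varchar(25), @BomDate varchar(25)',
     @ProjectCode = @ProjectCode,
     @TypeCode    = @TypeCode,
     @BomDate     = @BomDate;
[/code]

Since the table name here is fixed, a plain parameterized INSERT with no dynamic SQL at all would also do the job.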
↧
↧
SQL Server
[code="sql"]
declare @filexml xml =
'<IndividualSurvey xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/IndividualSurvey">
  <TotalPurchaseYTD>4431.4</TotalPurchaseYTD>
  <DateFirstPurchase>2003-04-10Z</DateFirstPurchase>
  <BirthDate>1956-01-23Z</BirthDate>
  <MaritalStatus>S</MaritalStatus>
  <YearlyIncome>50001-75000</YearlyIncome>
  <Gender>F</Gender>
  <TotalChildren>2</TotalChildren>
  <NumberChildrenAtHome>0</NumberChildrenAtHome>
  <Education>High School</Education>
  <Occupation>Skilled Manual</Occupation>
  <HomeOwnerFlag>1</HomeOwnerFlag>
  <NumberCarsOwned>2</NumberCarsOwned>
  <CommuteDistance>5-10 Miles</CommuteDistance>
</IndividualSurvey>'

select  doc.col.value('xmlns[1]', 'varchar(3000)')            totalpurchaseid,
        doc.col.value('DateFirstPurchase[1]', 'varchar(30)')  totalpurchaseid,
        doc.col.value('BirthDate[1]', 'varchar(30)')          totalpurchaseid,
        doc.col.value('MaritalStatus[1]', 'varchar(30)')      totalpurchaseid,
        doc.col.value('YearlyIncome[1]', 'varchar(30)')       totalpurchaseid,
        doc.col.value('Gender[1]', 'varchar(30)')             totalpurchaseid
from @filexml.nodes('/IndividualSurvey') doc(col)
[/code]
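The document above carries a default namespace, so the XQuery paths as written do not match the namespaced elements and the query comes back empty. A minimal sketch with the namespace declared, to be run in the same batch as the DECLARE above:

[code="sql"]
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/IndividualSurvey')
SELECT  doc.col.value('(TotalPurchaseYTD)[1]',  'varchar(30)') AS TotalPurchaseYTD,
        doc.col.value('(DateFirstPurchase)[1]', 'varchar(30)') AS DateFirstPurchase,
        doc.col.value('(BirthDate)[1]',         'varchar(30)') AS BirthDate,
        doc.col.value('(MaritalStatus)[1]',     'varchar(30)') AS MaritalStatus,
        doc.col.value('(YearlyIncome)[1]',      'varchar(30)') AS YearlyIncome,
        doc.col.value('(Gender)[1]',            'varchar(30)') AS Gender
FROM @filexml.nodes('/IndividualSurvey') AS doc(col);
[/code]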
↧
↧
Case Study Scenario Exercise about SQL Server 2008 R2 instances.
A company offers financial services to its clients. For the company, when it comes to clients' financial transactions and their income, it is very important not to lose any transactions due to a failure of an IT system. A large amount of data is generated every day, and the company uses SQL Server 2008 R2 instances to manage it. The company needs to build a system that does not allow data loss even if a small failure of the system occurs; any transaction that has not yet been committed by the client should not be lost. The system should provide high availability of data at every moment, and the company does not want to change the version of its SQL Server instances because this would lead to additional cost.

The main elements that will affect a successful implementation of the company's system are:
[b]Infrastructure of the network and servers[/b]
[b]Infrastructure used for storing data[/b]
[b]Automatic failover[/b]
[b]Copies of data shared across more than two servers, placed far from each other[/b]
[b]Exploitation of data space through compression techniques[/b]

[b]Provide a design of a hardware and software system infrastructure[/b] for the company that satisfies each of the above requirements, thereby providing a successful implementation.
↧
↧
UPDATE TABLE USING sp_executesql
Good afternoon, I have a problem with this case. Can anyone help analyze it, or suggest another solution?

[code="sql"]
DECLARE
    @TypeCode  varchar(25),
    @BomDateB  varchar(25),
    @BomDateA  varchar(25),
    @TbName    varchar(25),
    @SQL       varchar(max)

SET @TypeCode = 'PS-BPRG15AGW'
SET @TbName   = 'z' + @TypeCode
SET @BomDateB = '8/19/2016'
SET @BomDateA = '8/20/2016'

SET @SQL = 'UPDATE [PMLite].[dbo].[' + @TbName + '] SET [BOM Date] = ' + @BomDateA + ' WHERE [BOM Date] = ' + @BomDateB + ''

EXEC sp_executesql @SQL
[/code]
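One possible rewrite, a minimal sketch only: sp_executesql needs an nvarchar statement, the table name is the one piece that has to be concatenated (safely, via QUOTENAME), and the dates are passed as real parameters rather than pasted into the string, where an unquoted 8/19/2016 gets evaluated as arithmetic. This assumes [BOM Date] is a date/datetime column; if it is character data, keep the variables varchar and quote the values instead.

[code="sql"]
DECLARE @TypeCode varchar(25),
        @TbName   sysname,
        @BomDateB date,
        @BomDateA date,
        @SQL      nvarchar(max);

SET @TypeCode = 'PS-BPRG15AGW';
SET @TbName   = 'z' + @TypeCode;
SET @BomDateB = '20160819';
SET @BomDateA = '20160820';

SET @SQL = N'UPDATE [PMLite].[dbo].' + QUOTENAME(@TbName) +
           N' SET [BOM Date] = @NewDate WHERE [BOM Date] = @OldDate;';

EXEC sys.sp_executesql @SQL,
     N'@NewDate date, @OldDate date',
     @NewDate = @BomDateA,
     @OldDate = @BomDateB;
[/code]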
↧
Get table and database an index belongs to
Hi, I have an index and I am trying to find out which table and database it sits in. How can I do this without trawling through the indexes of every table? Thanks.
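A minimal sketch for a single database (the index name is a placeholder); if the database is unknown, the same query has to be run in each one, for example via the undocumented sp_MSforeachdb:

[code="sql"]
SELECT  DB_NAME()                       AS database_name,
        OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id)        AS table_name,
        i.name                          AS index_name,
        i.type_desc
FROM sys.indexes AS i
WHERE i.name = 'IX_MyIndexName';   -- placeholder index name
[/code]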
↧
↧
Unless the log is being backed up, is there a reason for setting a database to the BULK LOGGED recovery model
I understand the benefits of using the FULL and/or BULK LOGGED recovery models as far as backup/restore is concerned, but if the log file for a DB is not being backed up, is there any reason for setting a DB's recovery model to BULK LOGGED or FULL versus using SIMPLE?

For some reason, the DB that is used by a software/service we use is regularly changed from FULL to BULK LOGGED and back to FULL again throughout the day. I can see it in SQL Server's log files, and I can see it happening if I turn on Profiler and capture the activity. I found a stored procedure inside the DB that contains SET RECOVERY commands, and it's called numerous times throughout the day. Before we contact the vendor about this, I'd like to know if there is a SQL Server reason/benefit to doing this, separate from whatever reason the software vendor provides. We have never backed up the log file for this DB, so I'm puzzled as to why this thing is constantly changing the DB's recovery model.
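Not an answer to the vendor question, but a minimal sketch of the kind of check that shows, per database, the current recovery model alongside whether a log backup has ever been recorded in msdb:

[code="sql"]
SELECT  d.name,
        d.recovery_model_desc,
        MAX(b.backup_finish_date) AS last_log_backup   -- NULL = no log backup on record
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'L'                                 -- 'L' = transaction log backup
GROUP BY d.name, d.recovery_model_desc
ORDER BY d.name;
[/code]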
↧
Optimize update with an index
Hi guys, I am wondering how to optimize the update statement below, as it probably causes a table scan:

[code="sql"]
UPDATE Table1
SET    a = @a, b = @b, c = @c
WHERE  id = @id
  AND  name = @name
  AND  [file] = @file
  AND  lastupdated = @lastupdated
[/code]

Should I create an index on the columns in the WHERE criteria -> a non-clustered index on id, name and lastupdated? Any feedback is much appreciated. Thank you.
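Assuming the column names above, a minimal sketch of that index; the columns being updated (a, b, c) are deliberately left out of the key so the index itself does not have to be modified by the update:

[code="sql"]
CREATE NONCLUSTERED INDEX IX_Table1_id_name_lastupdated
    ON dbo.Table1 (id, name, lastupdated);
    -- [file] could be added as a key column too if it is selective
[/code]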
↧
Errors on a Linked Server
Hi, I put a database on a remote server a few weeks ago, and I access it through a linked server connection, but I encounter an error while processing jobs:

Cannot obtain the schema rowset "DBSCHEMA_TABLES_INFO" for OLE DB provider "SQLNCLI10" for linked server "ServerA". The provider supports the interface, but returns a failure code when it is used. [SQLSTATE 42000] (Error 7311)

I've got two servers, ServerA and ServerB, both on SQL Server 2008 R2 SP1 64-bit. I execute remote queries via [ServerB].sp_executesql (and it works fine). I searched on the Internet and found issues with the same error, but about linking SQL Server 2000 to 2008 (not applicable in my case). I also found clues suggesting the linked server does not respond completely to the first one (it responds at the application level but not at the process level). Has anyone encountered this problem yet? Thanks a lot for your reply.
↧
How to stop this message in SQL Server Logs : FILESTREAM: effective level = 0...........
Hi, how do I stop this message appearing in the SQL Server logs?

Message:
FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.

Version: Microsoft SQL Server 2008 (SP1) - 10.0.2734.0

Thank you,
Calico
↧
↧
Script to show me the status of all my jobs
Every day I log in to each of my servers and scroll down to Job Activity Monitor to check whether all the jobs completed successfully or whether I need to go into any of them and fix them up. I want to automate this. I know you can get SQL Server to send an email if a job errors out, but that currently isn't working. Maybe I should get that working, but I thought I'd like to have a script that I can run that simply shows me all the jobs on the server and their current status. So, kind of like what Job Activity Monitor does, but scripted to output that when the script runs. Is there already a script that does such a thing? Does anyone have any good tips, or do anything similar? Thanks.
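A minimal sketch of the scripted equivalent, reading the last recorded outcome of every job straight from msdb (run_status 0 = failed, 1 = succeeded, 3 = cancelled):

[code="sql"]
SELECT  j.name    AS job_name,
        j.enabled,
        h.run_status,
        h.run_date,
        h.run_time
FROM msdb.dbo.sysjobs AS j
OUTER APPLY (SELECT TOP (1) run_status, run_date, run_time
             FROM msdb.dbo.sysjobhistory
             WHERE job_id = j.job_id
               AND step_id = 0                -- step 0 = the overall job outcome row
             ORDER BY instance_id DESC) AS h
ORDER BY j.name;
[/code]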
↧
How is @pre_creation_cmd applied? And is Replication Monitor evil?
Hi, when a new transactional snapshot is being applied, at what point is @pre_creation_cmd put into action?

I've always assumed it's applied per article, just before the article receives the new data, thus leaving as much of the current data available for as long as possible so that other processes on the subscribers can continue. But after a long night of applying a new snapshot (< 8 GB), I noticed that some tables were empty and some had data. It seems that @pre_creation_cmd is being called at the start, in my case dropping and recreating each article before the data transfer begins. By "some had data" I mean it was in the middle of the snapshot being applied, and the subscriber does have extra tables. So is it applied per article, just before filling the table, or are all articles dropped and recreated at the start of applying the snapshot?

Is Replication Monitor evil? As mentioned, I had a long night of applying a small snapshot. It shouldn't have taken even 15 minutes, but it ended up taking hours. Because it was the middle of the night and all subscribers had almost no activity, I can only assume Replication Monitor was somehow holding things up. [url=https://www.brentozar.com/archive/2013/09/transactional-replication-change-tracking-data-capture/]Kendra Little[/url] mentions Replication Monitor blocking when it's used concurrently, but I was the only one online at the time and it is extremely unlikely that someone else happened to leave Replication Monitor open.

Replication links:
[url=https://msdn.microsoft.com/en-us/library/ms173857.aspx]sp_addarticle @pre_creation_cmd[/url]
[url=https://msdn.microsoft.com/en-us/library/ms151740.aspx]msdn FAQ for replication admins[/url]
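For reference, the parameter in question as it appears on sp_addarticle; the publication and article names here are placeholders:

[code="sql"]
EXEC sp_addarticle
     @publication      = N'MyPublication',   -- placeholder
     @article          = N'MyTable',         -- placeholder
     @source_owner     = N'dbo',
     @source_object    = N'MyTable',
     @pre_creation_cmd = N'drop';             -- none | delete | drop | truncate
[/code]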
↧
How do trigger events operate?
I have a data update script where I create a temp table and populate it with data from a table, based on some joins and the logic I want to apply. I then join that temp table back to the original table on its keys (201110 records). The base table has a trigger that writes to an audit table on inserts/updates to that table.

I am seeing something strange. I did a count of the table and see that 201110 records were updated, but when I count the records in my audit table for those records (select count(*) from audit where tablename = x) I see 1000300, then on rerun 1000358, then on rerun 1000390... so the audit record count keeps catching up.

My question is: if I already have 201110 records updated in the base table (x), why is the audit count "catching up"? I thought that on an update the trigger would write to the audit table 1 to 1, but the audit is catching up. This update is within a transaction and a try/catch; does that have anything to do with it?
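A minimal sketch of a set-based audit trigger, with hypothetical names, just to illustrate the expected 1-to-1 behaviour: an AFTER UPDATE trigger fires once per statement, and the inserted pseudo-table holds every updated row, so one statement touching 201110 rows should add 201110 audit rows in one go.

[code="sql"]
CREATE TRIGGER dbo.trg_X_Audit
ON dbo.X                          -- hypothetical base table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- one INSERT per statement, covering every updated row
    INSERT INTO dbo.Audit (TableName, KeyValue, AuditDate)
    SELECT 'X', i.KeyColumn, GETDATE()   -- hypothetical audit/key columns
    FROM inserted AS i;
END
[/code]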
↧
Converting an XML datatype to varchar(max).
Hello All,

Datatype: XML. With FOR XML, a large string is built from table content, and the result is stored as an XML datatype in a table. Now I want to do some manipulations on the resulting string, so I convert the XML string to a VARCHAR(MAX) datatype. [b]But I do not get the 'normal' EOL symbols. What should I do to get the normal EOL symbols (char(13)+char(10)) in the resulting string?[/b] The XML can be a very large 'string'. Any handy solutions for the EOL symbols?

Ben
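A minimal sketch of one workaround: the xml datatype does not keep the original line breaks as stored text, so after converting to varchar(max) they can be reinserted between elements with a REPLACE - a blunt trick, shown here on a tiny sample document:

[code="sql"]
DECLARE @x xml = N'<root><a>1</a><b>2</b></root>';

DECLARE @s varchar(max) = CONVERT(varchar(max), @x);

-- put CR+LF back between adjacent tags
SELECT REPLACE(@s, '><', '>' + CHAR(13) + CHAR(10) + '<') AS with_eol;
[/code]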
↧
↧
Partitioning worthwhile?
I have a database with around 120K records, filesize just over 500MB. I keep reading about partitioning and have been considering trying it, but it seems a fair amount of work. Is it worth even considering for a database this size? It will continue to grow, but not in any extreme fashion - it has gone from 50K records to the current 120K over a period of about eight years, and there is no reason to expect any major growth spurts in the future. There is no strict archiving activity in it - anything recorded at any time during its life may be accessed by anyone at any time. The only somewhat reasonable feature by which to partition would be a catalog index letter, into around 20 segments of hugely varying size. One segment would have almost 40K records; a few would have fewer than 100. Users tend to concentrate their work in one or several catalog groups - one user may have several catalogs, so I would group all the catalogs belonging to one user into a single partition (if that's possible). Does this sound like it's even worth messing with? Current response time for most queries is sub-second, and simultaneous users are generally no more than two or three, and even that rarely.
↧
Extensible Key Management (EKM) and SQL Server 2008 TDE Encryption Recommendations
Hello All,

I am sure this topic has come up before, and from my searching of previous material I have gathered some great knowledge on the subject. I have been given a task to research a budget-friendly way to implement EKM (via HSM or possibly the MS Certificate Store) and would like some advice from members who have been part of a team that integrated or researched this subject. Our requirements are quite simple:
- Separation of duties between the DBAs and the network staff (key managers)
- Central management of encryption keys/certificates

We are currently using TDE to encrypt our databases at rest, with a single certificate used for all databases. I am aware we can generate a Database Encryption Key (DEK) for each database; however, the ability of the DBA staff to back up the certificate and private key with any password they wish does not satisfy our requirements. With that said, here are some questions I would appreciate some insight on:

1) Can we use the MS Certificate Store to manage our certificates? i.e. the network team generates a certificate through the store and provides it to the DBA staff on a request basis. I understand this would be a lot of manual labor to log usage etc., but currently we only have one customer that requires such management practices. It is also a budget-friendly option.

2) Recommendation of an EKM/HSM solution. We have been doing our research on such solutions; however, if anyone has experience with such tools, I would appreciate some insight and/or a recommended product. Here are the ones we are reviewing:
ARX's PrivateServer (HSM) - http://www.arx.com/products/privateserver-hsm/
Vormetric - http://www.vormetric.com/data-security-solutions/use-cases/MS-SQL
Townsend's Alliance Key Manager (HSM) - http://townsendsecurity.com/products/encryption-key-management
SafeNet Key Management Software - http://www.safenet-inc.com/data-encryption/enterprise-key-management/

Thanks for your assistance! ~ N
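On the per-database DEK point, a minimal sketch of what that looks like under TDE (database and certificate names are placeholders); note that the server certificate protecting the DEK is still the piece whose custody and backup password are at issue here:

[code="sql"]
USE MyDatabase;   -- placeholder database
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE MyTdeCertificate;   -- placeholder certificate
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
[/code]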
↧
Does anyone know Mike Byrd?
Hi, I've been trying to access some links within an article written by Mike Byrd on Change Tracking. One of the pages is: [url=http://logicalread.solarwinds.com/sql-server-change-tracking-bulletproof-etl-p1-mb01/]http://logicalread.solarwinds.com/sql-server-change-tracking-bulletproof-etl-p1-mb01/[/url]. Unfortunately, there appears to be an error when hyperlinking to the SQL scripts. This is the same for all 3 parts of his article on change tracking. Does anyone know how I might get in contact with him, or whether these articles and associated scripts are available anywhere else?
↧