SQL Server Data Mining – Data Mining Features


Welcome to SQLServerDataMining.com

This site has been designed by the SQL Server Data Mining team to provide the SQL Server community with access to and information about our in-database data mining and analytics features. SQL Server 2000 was the first major database release to put analytics in the database. Catch up with the latest SQL Server Data Mining news in our newsletter.

SQL Server 2012 SP1 Data Mining Add-ins for Office (with 32-bit or 64-bit Support)

The Data Mining Add-ins allow you to harness the power of SQL Server 2012 predictive analytics in Excel and Visio and they have been updated to include 32-bit or 64-bit support for Office 2010 or Office 2013. Use Table Analysis Tools to get insight with a couple of clicks. Use the Data Mining tab for full-lifecycle data mining, and build models which can be exported to a production server. Visualize your models in Visio.

SQL Server 2012 Data Mining

Microsoft expert Rafal Lukawiecki provides free and paid videos on data mining for SQL Server 2012 at Project Botticelli. The website has other Microsoft BI topics too from leading Microsoft experts.

SQL Server DM with Excel 2010 and PowerPivot

Microsoft MVP Mark Tabladillo shows you how to unleash SQL Server 2008 Data Mining with Excel 2010 and SQL Server PowerPivot for Excel, Microsoft's new self-service BI offering.

Predixion Software Offers Third-Party Tools for SQL Server Data Mining

Our friends at Predixion Software have released Predixion Insight, their predictive analytics offering that builds on the SQL Server Data Mining platform. They have added some cool visualization and collaboration features that are surfaced in Excel Services as well.

Disclaimer: SQLServerDataMining.com is currently managed by members of the SQL Server Data Mining development team at Microsoft Corporation. It does not represent Microsoft’s official position on its products or technologies. All content is provided “AS-IS” with no warranties and confers no rights.


Removing Rows with the DELETE Statement in SQL Server


T-SQL Programming Part 8 – Removing Rows with the DELETE Statement in SQL Server

Greg Larsen explores how to remove rows from a SQL Server table using the DELETE statement.

In my last two articles I discussed using the INSERT and UPDATE statements. Those two commands added new rows and modified existing rows. In this article I will explore how to remove rows from a table using the DELETE statement.

Syntax of the DELETE Statement

You may want to delete rows because they are no longer needed, or because they were incorrectly added in the first place. The DELETE statement is used to remove rows from a SQL Server table. A single DELETE statement can remove a single row or a number of rows. Here is the basic syntax of the DELETE statement.
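The syntax listing itself did not survive in this copy of the article; in simplified form, consistent with the parameter descriptions that follow (see Books Online for the full grammar), it looks like this:

```sql
DELETE [ TOP ( expression ) [ PERCENT ] ]
FROM object
[ OUTPUT <dml_select_list> ]
[ WHERE search_condition ]
```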

( expression ) – is a number, or an expression that evaluates to a number, used to limit the number of rows deleted

object – is the name of an object in a database from which you want to delete records

OUTPUT Clause – identifies the column values of the deleted rows to be returned from the DELETE statement

search_condition – the condition used to identify the rows to be deleted

For the complete syntax of the DELETE statement refer to Books Online.

In order to demonstrate how to use the DELETE statement I will be creating a DemoDelete table. Here is the code I used to create and populate my DemoDelete table.
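The original listing is missing from this copy of the article. A reconstruction consistent with the column names and values referenced later (the table needs at least an ID column and a DeleteDesc column, with rows such as “Thing One” and “The Mother”) might look like the following; the exact rows are an assumption:

```sql
-- Hypothetical reconstruction of the demo table
CREATE TABLE DemoDelete (
    ID INT,
    DeleteDesc VARCHAR(50)
);

INSERT INTO DemoDelete (ID, DeleteDesc)
VALUES (1, 'Thing One'),
       (2, 'Thing Two'),
       (3, 'The Mother'),
       (4, 'The Cat'),
       (5, 'The Hat'),
       (6, 'Sally'),
       (7, 'The Fish');
```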

Deleting a Single Row Using WHERE Constraint

In order to delete a single row from a table you need to identify that row with a WHERE constraint. Below is some code that deletes a single row from my DemoDelete table:
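The listing was stripped from this copy; the statement the surrounding text describes would be:

```sql
-- Delete the single row whose DeleteDesc matches the constraint
DELETE FROM DemoDelete
WHERE DeleteDesc = 'The Mother';
```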

In this code I used the DeleteDesc column to constrain the records that I would be deleting. By specifying that the DeleteDesc column value had to be equal to the value “The Mother”, only one record in my table got deleted, because only one row in my table had that value. Now if my table contained a number of rows that had a column value of “The Mother” then all the rows that contained that value would be deleted.

If you are unsure of the rows you are identifying to be deleted using the above example, and you want to make sure the rows you have targeted with the WHERE constraint are correct, then you can first run a SELECT statement. After you are confident that your SELECT statement is selecting the rows you want to delete you can then convert it to a DELETE statement.
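The verify-then-delete pattern described above can be sketched like this:

```sql
-- Step 1: verify which rows the predicate matches
SELECT ID, DeleteDesc
FROM DemoDelete
WHERE DeleteDesc = 'The Mother';

-- Step 2: once confident, convert the SELECT to a DELETE
DELETE FROM DemoDelete
WHERE DeleteDesc = 'The Mother';
```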

Using the TOP Clause to Delete a Single Row

You can also use the TOP clause to delete a single row. Below is an example where I used the TOP clause to delete one row from my DemoDelete table:
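The missing listing would be along these lines:

```sql
-- Remove one row; with no ORDER BY, which row is removed is not guaranteed
DELETE TOP (1) FROM DemoDelete;
```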

This statement deleted an arbitrary row from my DemoDelete table. It was arbitrary because, without an ORDER BY clause, SQL Server does not guarantee which row the TOP clause will target. When I reviewed the records left in my table I saw I had deleted the record that had an ID value of 1 and a DeleteDesc of “Thing One”. Note that if I change the TOP clause to another number, like 3, then this statement would delete that number of rows.

Deleting the TOP 1 Record from a Sorted Set

If you want to delete the first record from a sorted set you need to write your TSQL DELETE statement similar to the following code:
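A statement matching the description that follows (a subquery returning a single ID from a descending sort, plus a TOP (1) on the DELETE itself) would be:

```sql
DELETE TOP (1) FROM DemoDelete
WHERE ID IN (SELECT TOP (1) ID
             FROM DemoDelete
             ORDER BY ID DESC);
```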

In the above code I created a subquery that returned a single ID value based on the descending sort order of the ID column in my DemoDelete table. I then used the WHERE constraint to delete only records that had that ID value. I also placed a TOP (1) clause on my DELETE statement so that only a single row would be deleted should my DemoDelete table contain multiple records with the same ID value. If you are following along, you can see the above code deleted the DemoDelete record that had an ID value of 7.

Since my DemoDelete table did not contain multiple records with the same ID value I could have also deleted the largest ID value row by running the following code:
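That simpler alternative would look like this:

```sql
-- Delete the row with the largest ID value
DELETE FROM DemoDelete
WHERE ID = (SELECT MAX(ID) FROM DemoDelete);
```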

When I run this code against my DemoDelete table it deletes the row with an ID value of 5.

Using Another Table to Identify the Rows to Delete and the OUTPUT Clause

There are times when you might want to delete the rows in a table based on values from another table. An example of where you might want to do this is to remove rows from your inventory table based on some sales data. To demo this I first need to generate another table that contains key values for the rows I want to delete. Here is the code to create and populate my other table:
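The listing is missing here; a reconstruction consistent with the join on DeleteDesc used below (the exact rows are an assumption) could be:

```sql
-- Hypothetical reconstruction of the key-value table
CREATE TABLE RecordsToDelete (
    DeleteDesc VARCHAR(50)
);

INSERT INTO RecordsToDelete (DeleteDesc)
VALUES ('The Cat'),
       ('Sally');
```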

At this point after running all my different DELETE statements against my DemoDelete table there are only three rows left in my table. By selecting all the rows in my DemoDelete table I see that these three rows are left:

In order to use the RecordsToDelete table to delete specific records in my DemoDelete table I need to run the code below.
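A DELETE with a join and an OUTPUT clause, as described below, would look like this:

```sql
DELETE dd
OUTPUT DELETED.*
FROM DemoDelete dd
INNER JOIN RecordsToDelete rtd
    ON dd.DeleteDesc = rtd.DeleteDesc;
```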

This code joins the table DemoDelete and RecordsToDelete based on the DeleteDesc column. When the DeleteDesc matches between the two tables the matched rows within the DemoDelete table are deleted.

My DELETE statement above also contains the OUTPUT clause. The OUTPUT clause returns the column values of the deleted rows that are identified in the OUTPUT clause. In the code above I specified “DELETED.*”. The “*” means to return all the column values from the deleted rows. When I ran this code the following rows were returned:

These returned rows could be used by your application for some purpose, like creating an audit trail.

Inserting OUTPUT Clause Data into a Table

There are times when you might want to retain the data created by the OUTPUT clause in a table, instead of just returning the deleted row values to the application. To demonstrate running a DELETE statement that populates the row values being deleted into a table I will run the code below.
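A sketch of that statement, using a table variable (the name @DeletedRows is illustrative) to capture the OUTPUT rows:

```sql
DECLARE @DeletedRows TABLE (
    ID INT,
    DeleteDesc VARCHAR(50)
);

DELETE dd
OUTPUT DELETED.* INTO @DeletedRows
FROM DemoDelete dd
INNER JOIN RecordsToDelete rtd
    ON dd.DeleteDesc = rtd.DeleteDesc;

-- Review the captured rows
SELECT * FROM @DeletedRows;
```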

The SELECT statement in the above code snippet displayed the following output:

In both of my examples that used the OUTPUT clause of the DELETE statement I specified “DELETED.*” to denote outputting all the column values for the rows being deleted. I could have specified the actual column values I wanted to output. The code below is equivalent to the code above.
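That equivalent statement, with an explicit column list in the OUTPUT clause, would be:

```sql
DECLARE @DeletedRows TABLE (
    ID INT,
    DeleteDesc VARCHAR(50)
);

DELETE dd
OUTPUT DELETED.ID, DELETED.DeleteDesc INTO @DeletedRows
FROM DemoDelete dd
INNER JOIN RecordsToDelete rtd
    ON dd.DeleteDesc = rtd.DeleteDesc;

SELECT * FROM @DeletedRows;
```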

In this code you can see I specified “DELETED.ID, DELETED.DeleteDesc” instead of “DELETED.*”. You can verify this code is equivalent by inserting the “The Cat” row back into the DemoDelete table and then running the code above.

Multiple Ways to Delete Rows

As you can see, there are multiple ways to delete rows from a SQL Server table. You can use the WHERE clause to identify specific criteria for the rows that need to be deleted. You can join a table to the table from which you are deleting rows to identify which rows to delete. You can even use the TOP clause to restrict the number of rows that will be deleted. This article should help you develop your DELETE statement the next time you have to remove rows from a SQL Server table.

AIM SOFTWARE



AIM Software has the expertise and commitment to provide cost effective aged care specific financial management solutions you can rely on.

AIM Software comprises a fully integrated suite of modules that can be purchased as a whole, in combinations or individually, with the complete assurance that additional modules can be integrated into a full enterprise software package at a later stage.

Don’t work harder – work SMARTER with AIM Software:


  • Outsourced Solutions
    Outsourced Financial Management Services (OFMS) from AIM Software may be the answer.
  • Software Solutions
    A fully integrated suite of modules to meet your needs. Windows-based tools tried and tested since 1994.
  • Support Solutions
    A comprehensive range of professional multi-level support services is available to all our clients.

Latest News

Managing SQL Server Services with PowerShell



Managing service accounts is one of those tedious jobs that tend to annoy me. Changing the passwords of these accounts regularly is a good security practice, but it takes a lot of time and can be difficult to manage. In other words, it is a perfect task to automate with PowerShell.

There are two ways to handle this task, both through Windows Management Instrumentation (WMI). The first way uses the base WMI interface, which can be used to manage all Windows services. Using it is a little convoluted, but it gets the job done:
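The original snippet is missing from this copy; a sketch of the base WMI approach (the server, service, and account names are placeholders) would be:

```powershell
# Fetch the service via the base WMI interface
$service = Get-WmiObject -Class Win32_Service -ComputerName 'SQLSRV01' `
    -Filter "Name='MSSQLSERVER'"

# .Change() takes 11 positional arguments; pass $null to leave a setting unchanged:
# DisplayName, PathName, ServiceType, ErrorControl, StartMode, DesktopInteract,
# StartName (the account), StartPassword, LoadOrderGroup,
# LoadOrderGroupDependencies, ServiceDependencies
$service.Change($null, $null, $null, $null, $null, $null,
    'DOMAIN\sqlsvc', 'P@ssw0rd!', $null, $null, $null)

# The new credentials only take effect after a restart
$service.StopService() | Out-Null
$service.StartService() | Out-Null
```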

This call is easy to decipher. Using the .Change() method of the service class, we can update the service account name and/or password (as well as other properties of the service). You probably noticed the number of arguments the .Change() method takes, which makes it cumbersome to use. The other gotcha is that the service still needs to be restarted in order for these changes to take effect. Depending on your needs, these gotchas can be good or bad, but they can be handled depending on how you code around them.

If you’ve managed services through the GUI, this method probably reminds you of managing accounts through the services MMC console. However, most SQL Server folks will use the SQL Server Configuration Manager instead. These two methods are subtly different: the SQL Server Configuration Manager handles some additional tasks (such as restarting the service) as part of its interface. If we want to manage our SQL Services in the same fashion, we can leverage a part of SMO: the Wmi.ManagedComputer and Wmi.Service classes.

To handle our services, we need an extra step or two, but it’s a little cleaner to write:
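The listing did not survive here; a sketch of the SMO-based approach (server and service names are placeholders):

```powershell
# Load the SqlWmiManagement assembly (already present if you loaded the SQLPS module)
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SqlWmiManagement') | Out-Null

# Instantiate the ManagedComputer object and retrieve the service to alter
$smowmi  = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer 'SQLSRV01'
$service = $smowmi.Services | Where-Object { $_.Name -eq 'MSSQLSERVER' }

# Change the service account; this restarts the service immediately
$service.SetServiceAccount('DOMAIN\sqlsvc', 'P@ssw0rd!')
```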

We first need to load the SqlWmiManagement assembly, just as we would load the SMO libraries if we were using that functionality (note: this library is loaded automatically if you load the SQLPS module). Then we need to instantiate the ManagedComputer object and retrieve the specific service we want to alter. The final step is simply to change the service account.

This works about the same as the base WMI approach, though we’re altering the service by using the same functionality as the SQL Server Configuration Manager. This means that once we change the service account, it will force a service restart. This is good and bad. The good is that it will apply the change immediately and you will know right away whether the account change is valid. The bad is that you cannot delay the service restart, so if you use this method you want to be sure it is a good time to restart your SQL Server service.

I have built a function around using the second method that makes handling this process a little easier. Also, because I’m not a fan of passing passwords in plain text, I built the function to take a PSCredential object to keep my account information secure. In order to spare you the wall of text, you can view the full function on my GitHub repository.

The function can be loaded through a variety of methods, but once it is loaded calling it is simply a matter of creating the credential for the service account and calling the function:
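A call in that shape, assuming the function is named Set-SqlServiceAcct with -Instance and -ServiceAccount parameters (the actual name and parameters are whatever the GitHub repository defines), would look like:

```powershell
# Create the credential for the service account, then call the function
# (function and parameter names are stand-ins for those in the repository)
$cred = Get-Credential 'DOMAIN\sqlsvc'
Set-SqlServiceAcct -Instance 'SQLSRV01' -ServiceAccount $cred
```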

Creating functions allows us to abstract some of the messy bits and make our lives a little easier. In this case, my function handles the following:

  • Decoding the credential in a way that keeps the account information secure.
  • Managing the service names based on the instance name (passed in the standard HOST\INSTANCE format).
  • Restarting the SQL Server Agent service if it is not running after the SQL Server service restarts.
  • Accepting a list of instances and processing all of them.

This simplifies the changing of account information and gives us many opportunities for automating large scale password changes. For example, if you use a single service account for all your instances, changing it is a snap:
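A bulk change with a hypothetical function of that shape might look like:

```powershell
# One service account shared across many instances: pass the whole list
# (Set-SqlServiceAcct and its parameters are stand-in names)
$cred      = Get-Credential 'DOMAIN\sqlsvc'
$instances = Get-Content 'C:\Temp\instances.txt'   # one HOST\INSTANCE per line
Set-SqlServiceAcct -Instance $instances -ServiceAccount $cred
```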

This simple pattern and function will not only make managing our security policy easier, but also more consistent. Using a simple list of servers from a text file, a database table, or even our Central Management Server and combining it with this function means we ensure that we are applying these changes to every server in the list. This is how we can build for automation, focusing on making simple tasks like this repeatable and consistent.


Enable remote connections for SQL Server Express 2012 – Stack Overflow


I just installed SQL Server Express 2012 on my home server. I’m trying to connect to it from Visual Studio 2012 from my desktop PC, and repeatedly getting the well-known error:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)

What I’ve done to try to fix this:

  • Run SQL Server Configuration Manager on the server and enable SQL Server Browser
  • Add a Windows Firewall exception on the server for TCP, ports 1433 and 1434 on the local subnet.
  • Verify that I have a login on the SQL Server instance for the user I’m logged in as on the desktop.
  • Verify that I’m using Windows Authentication on the SQL Server instance.
  • Repeatedly restart SQL Server and the whole dang server.
  • Pull all my hair out.

How can I get SQL Server 2012 Express to allow remote connections!?

asked Jun 30 ’12 at 22:21

In case it helps anyone else: this helped, but I still wasn’t able to connect until I started the SQL Server Browser service. (Note: I had to go into the Windows Services application to do this, because the SQL Server Browser service’s startup type was Disabled for some reason. Changed the Startup Type to Automatic, started the service, and was then able to connect.) – mercurial Dec 20 ’12 at 15:35

On Windows 8 and SQL 2012 Express SP1 installed to SQLEXPRESS instance I had to set dynamic ports to anything other than blank (if you deleted it, set to 0 then it will re-calculate a new random port for you) AND also open BOTH TCP 1433 and UDP 1434 incoming port rules in the Advanced Firewall control panel. When dynamic ports was blank the SQL Server just hung on start-up. Code Chief Mar 30 ’13 at 0:54

Kyralessa provides great information but I have one other thing to add where I was stumped even after this article.

Under SQL Server Network Configuration > Protocols for your instance, make sure TCP/IP is Enabled. Right-click TCP/IP and choose Properties. Under IP Addresses you need to set Enabled to Yes for each connection type that you are using.

I had a different problem from what all of the answers so far mentioned!

I should start off by saying that I had it in Visual Studio, and not SQL Server Express but the solution should be exactly the same.

Well, god, it’s actually really simple and maybe a bit foolish. When I tried to create a database and Visual Studio suggested the name of the SQL Server it gave me my Windows username and since it’s actually the name of the server I went for it.

In reality it actually was my Windows username + \SQLEXPRESS. If you didn’t change any settings this is probably yours too. If it works, stop reading; this is my answer. If it doesn’t work maybe the name is different.

If, like me, you only had this problem in Visual Studio to check what is yours follow these steps:

  1. Open SQL Server Management Studio.
  2. If you don’t see your server (docked to the left by default), press F8 or go to View -> Object Explorer.
  3. Right-click on the name of the server and choose Properties (the last item).
  4. At the bottom left you can see your server’s actual name under “Server” (not Connection, but above it).

This is the name of the server, and this is what you should attempt to connect to, not what Visual Studio suggests!

answered Jan 24 ’13 at 12:20

I had the same issue with SQL Server 2014 locally installed named instance. Connecting using the FQDN\InstanceName would fail, while connecting using only my hostname\InstanceName worked. For example: connecting using mycomputername\sql2014 worked, but using mycomputername.mydomain.org\sql2014 did not. DNS resolved correctly, TCP/IP was enabled within SQL Configuration Manager, Windows Firewall rules added (and then turned the firewall off for testing to ensure it wasn’t blocking anything), but none of those fixed the problem.

Finally, I had to start the “SQL Server Browser” service on the SQL Server and that fixed the connectivity issue.

I had never realized that the SQL Server Browser service actually assisted the SQL Server in making connections; I was under the impression that it simply helped populate the dropdowns when you clicked “browse for more” servers to connect to, but it actually helps align client requests with the correct port number to use, if the port number is not explicitly assigned (similar to how website bindings help alleviate the same issue on an IIS web server that hosts multiple websites).

  • when you use wstst05\sqlexpress as a server name, the client code separates the machine name from the instance name and the wstst05 is compared against the netbios name. I see no problem for them to match and the connection is considered local. From there, we retrieve the needed information WITHOUT contacting SQL Browser and connect to the SQL instance via Shared Memory without any problem.
  • when you use wstst05.capatest.local\sqlexpress, the client code fails the comparison of the name (wstst05.capatest.local) to the netbios name (wstst05) and considers the connection “remote”. This is by design and we will definitely consider improving this in the future. Anyway, due to considering the connection remote and the fact that it is a named instance, client decides that it needs to use SQLBrowser for name resolution. It attempts to contact SQL Browser on wstst05.capatest.local (UDP port 1434) and apparently that part fails. Hence the error you get.

From the “Using SQL Server Browser” section:

If the SQL Server Browser service is not running, you are still able to connect to SQL Server if you provide the correct port number or named pipe. For instance, you can connect to the default instance of SQL Server with TCP/IP if it is running on port 1433. However, if the SQL Server Browser service is not running, the following connections do not work.

  • Any component that tries to connect to a named instance without fully specifying all the parameters (such as the TCP/IP port or named pipe) .
  • Any component that generates or passes server\instance information that could later be used by other components to reconnect.
  • Connecting to a named instance without providing the port number or pipe.
  • DAC to a named instance or the default instance if not using TCP/IP port 1433.
  • The OLAP redirector service.
  • Enumerating servers in SQL Server Management Studio, Enterprise Manager, or Query Analyzer.

If you are using SQL Server in a client-server scenario (for example, when your application is accessing SQL Server across a network), if you stop or disable the SQL Server Browser service, you must assign a specific port number to each instance and write your client application code to always use that port number. This approach has the following problems.

  • You must update and maintain client application code to ensure it is connecting to the proper port.
  • The port you choose for each instance may be used by another service or application on the server, causing the instance of SQL Server to be unavailable.

And more info from the same article from the “How SQL Server Browser Works” section:

Because only one instance of SQL Server can use a port or pipe, different port numbers and pipe names are assigned for named instances, including SQL Server Express. By default, when enabled, both named instances and SQL Server Express are configured to use dynamic ports, that is, an available port is assigned when SQL Server starts. If you want, a specific port can be assigned to an instance of SQL Server. When connecting, clients can specify a specific port; but if the port is dynamically assigned, the port number can change any time SQL Server is restarted, so the correct port number is unknown to the client. When SQL Server clients request SQL Server resources, the client network library sends a UDP message to the server using port 1434. SQL Server Browser responds with the TCP/IP port or named pipe of the requested instance. The network library on the client application then completes the connection by sending a request to the server using the port or named pipe of the desired instance.

answered Oct 20 ’15 at 18:08

How to check for last SQL Server backup



As a database professional, I get asked to review the health of database environments very often. When I perform these reviews, one of the many checks I perform is reviewing backup history and making sure that the backup plans in place meet the requirements and service level agreements for the business. I have found a number of backup strategies implemented using full, differential and transaction log backups in some fashion.

In more cases than I would like to share, I have found business-critical databases that are not being backed up properly. In the worst case this means having no backups at all, or a backup strategy that does not meet the recoverability requirements of the business.

When doing an initial check I gather many details about the environment. Regarding backups, I capture things such as the recovery model, last full backup, last differential, and the last two transaction log backups. Having this information allows me to determine what the backup strategy is and point out any recoverability gaps.

Some examples I have found are: 1) no backups, period; 2) a full backup from months ago plus daily differentials, where the full had been purged from the system; 3) a full backup of a user database in Full recovery mode with no transaction log backups; 4) proper use of weekly full, daily differential, and scheduled transaction log backups, except that the schedule was set to hourly while the customer expected no more than 15 minutes of data loss. I am happy to report that I do find proper backup routines that meet the customers’ service level agreements too.

The code I like to use for this check is below.
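The author's script is not reproduced in this copy; a minimal query in the same spirit, pulling the recovery model and the most recent full, differential, and log backup per database from the msdb backup history, would be:

```sql
SELECT  d.name AS DatabaseName,
        d.recovery_model_desc AS RecoveryModel,
        MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS LastFullBackup,
        MAX(CASE WHEN b.type = 'I' THEN b.backup_finish_date END) AS LastDiffBackup,
        MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS LastLogBackup
FROM    sys.databases d
LEFT JOIN msdb.dbo.backupset b
        ON b.database_name = d.name
WHERE   d.name <> 'tempdb'
GROUP BY d.name, d.recovery_model_desc
ORDER BY d.name;
```

Remember the caveat raised in the comments below: a history row only proves a backup ran, not that the backup file still exists.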

Ensuring that you have backups is crucial to any check of a SQL Server instance. In addition to ensuring that backups are being created, validation of those backups is just as important. Backups are only valid if you can restore them.

When I have the opportunity to share my experiences of backup and recovery, I always like to explain how to back up the tail end of a transaction log, and how to attach a transaction log from one database to another in order to back up the tail end of the log. I have created a couple of videos on how to accomplish this that you can view using this link: http://www.timradney.com/taillogrestore

20 Responses to How to check for last SQL Server backup

Tim, I always like to add a disclaimer that just because the history is there doesn’t mean the file is; I’ve seen times when they got cleaned up too soon! Another edge case is having deadlocks that prevent the history record from being added, making it look like the backup didn’t happen. I like the premise of checking/confirming backups match what they expect.

Very good points.

Andrew Alger says:

Great script, thanks for sharing!
To take this one step further, I created a scheduled script that checks and alerts me if my backups have not been run within a set time. Server updates and restarts always seem to take place during my backup windows.

Also, enough cannot be said for validating backups. My sys admin was running nightly backups that messed up my backup chain and I had no idea until I began validating these. No need to say what would have happened had I needed to perform an actual restore

Kevin M Parks says:

care to share you scheduled script?

Hi Andrew,
Can you please share the scheduled script that checks and alerts with me?

Thanks for sharing your thoughts and your script.

I am curious as to what questions you would ask to determine what the SLA should be (or more specifically, what point people want to be able to recover to).

In my experience I quite often find full recovery models with full, differential and transaction log backups in place for systems that I feel simply do not need that level of backup.

For instance, I found a system backup scheduled to be restored daily onto a separate database on a reporting server. This then had full backups with log file backups running on the reporting server.

In other instances, quite often systems are able to dynamically recreate the data in an instant, but simply use the database as a convenience. In such cases, I would set the recovery model to simple and have no backups run. I then use a PowerShell script to shut down the (vSphere) server at night and simply back up the entire server in a shutdown state. Since it is in a shutdown state, the data file is backed up without a problem, and since the SLA is content with falling back one day (which many production systems I have are), this seems to be the quickest model of recovery without any fuss or hunting for a script and backup files.

Great question. For me, I typically ask two very simple questions: 1) How much data can you afford to lose? 2) How long can your system be down? The typical response is none and none, and then the explanations and negotiations start. I have systems like you mentioned where a full from the previous night is sufficient. Your scenario of using a file backup, such as shutting down the service and backing up all the files, meets your SLA. For a reporting server like you mentioned that is just a restored copy from production on a daily basis, why back up at all? For organizations I support, we document the SLA (RPO and RTO) of each database and work to meet that.

Many times, working with different lines of business requires explaining to the business how backups work, what the industry standards are, and what is realistic. When they don’t want to hear that a 15-minute RPO is the best option, present them with the price tag to lower the RPO. It really boils down to numbers and dollars in some cases.

Awesome query! I’ve been looking for something like this for a while. If you are interested, I modified it a bit for my use and turned it into a sproc that uses dynamic SQL to check all my servers and instances. I’m making myself a dashboard with this. I’d like to share it with you.

I still have one more piece I’d like to add involving xp_fileexist to complete my dashboard project. I hope to have that solved soon.

Thanks again for taking the time to share this with all of us! Outstanding job!

Photo Gallery: 8 Great New Features in SQL Server 2016


SQL Select: 8 Great New Features in SQL Server 2016

1. Always Encrypted

Always Encrypted is designed to protect data at rest or in motion. With Always Encrypted, SQL Server can perform operations on encrypted data and the encryption key can reside with the application. Encryption and decryption of data happens transparently inside the application. This means the data stored in SQL Server will be encrypted which can secure it from DBA and administrators but that also has considerations for ad-hoc queries, reporting and exporting the data.

2. Stretch Database

The idea behind this feature is certainly interesting. The upcoming Stretch Database feature will allow you to dynamically stretch your on-premises database to Azure. This would enable your frequently accessed, hot data to stay on-premises while your infrequently accessed, cold data is moved to the cloud. This could enable you to take advantage of low-cost Azure storage and still have high-performance applications. However, this is one trick where Microsoft really needs to get the partitioning right to keep your queries from straying into the cloud and killing your performance.

3. Real-time Operational Analytics

This feature uses the dynamic duo of SQL Server’s in-memory technologies: it combines In-Memory OLTP with the in-memory columnstore for real-time operational analytics. Its purpose is to tune your system for optimal transactional performance as well as increase workload concurrency. This sounds like a great combination, and applying analytics to your system’s performance is something a lot of customers have been asking for for a long time, but you will certainly need to have the memory to take advantage of it.

4. PolyBase into SQL Server

Big Data continues to grow in strategic importance, but unless you had the SQL Server Parallel Data Warehouse (PDW), connecting SQL Server to Big Data, and Hadoop in particular, was limited and difficult. In previous releases, PDW was the only version of SQL Server that came with PolyBase, a technology that bridged SQL Server and Hadoop by enabling you to construct and run SQL queries over Hadoop data stores, eliminating the need to understand HDFS or MapReduce. SQL Server 2016 promises to bring the PolyBase technology mainstream into the primary SQL Server SKUs (probably the Enterprise edition).
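As a rough sketch of what querying Hadoop data through PolyBase looks like (object names, columns and the HDFS location are all placeholders, and the exact options may differ in the SQL Server 2016 release):

```sql
-- Point SQL Server at a Hadoop cluster (placeholder location)
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

-- Describe the layout of the files sitting in HDFS
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Expose an HDFS directory as a queryable table
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate DATETIME2,
    Url     NVARCHAR(400),
    Hits    INT
)
WITH (LOCATION = '/logs/', DATA_SOURCE = HadoopCluster, FILE_FORMAT = CsvFormat);

-- Ordinary T-SQL over Hadoop data, no MapReduce required
SELECT Url, SUM(Hits) AS TotalHits
FROM dbo.WebLogs
GROUP BY Url;
```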

5. Native JSON Support

JSON (JavaScript Object Notation) is a standardized data interchange format that is currently not supported natively by SQL Server. To perform JSON imports and exports today you need to hand-code complex T-SQL, SQLCLR or JavaScript. SQL Server 2016 promises to simplify this by incorporating JSON support directly into SQL Server, much like XML. SQL Server 2016 will natively parse and store JSON as relational data and will support exporting relational data to JSON.
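Based on the syntax Microsoft has announced, the new support looks roughly like this (the table, columns and variable are made up for illustration):

```sql
-- Export relational rows as JSON
SELECT OrderID, CustomerName, OrderDate
FROM dbo.Orders
FOR JSON AUTO;

-- Parse a JSON string back into rows
DECLARE @json NVARCHAR(MAX) =
    N'[{"OrderID":1,"CustomerName":"Contoso"}]';
SELECT *
FROM OPENJSON(@json)
WITH (OrderID INT, CustomerName NVARCHAR(100));
```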

6. Enhancements to AlwaysOn

SQL Server 2016 will also continue to advance high availability and disaster recovery with several enhancements to AlwaysOn. The upcoming release will enhance AlwaysOn with the ability to have up to three synchronous replicas. Additionally, it will include DTC (Distributed Transaction Coordinator) support as well as support for round-robin load balancing of the secondary replicas. There will also be support for automatic failover based on database health.

7. Enhanced In-Memory OLTP

First introduced with SQL Server 2014, In-Memory OLTP will continue to mature in SQL Server 2016. Microsoft will enhance In-Memory OLTP by extending the functionality to more applications while also improving concurrency. This means expanding the T-SQL surface area, increasing the total amount of memory supported into the terabyte range, and supporting a greater number of parallel CPUs.
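For reference, this is roughly what declaring a memory-optimized table looks like with the syntax introduced in SQL Server 2014 (table and column names are illustrative); SQL Server 2016 extends what such tables can do rather than changing the basic declaration:

```sql
-- A memory-optimized, fully durable table (SQL Server 2014 syntax)
CREATE TABLE dbo.SessionState (
    SessionID  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload    VARBINARY(8000),
    LastAccess DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```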

8. Revamped SQL Server Data Tools

Another welcome change in SQL Server 2016 is the reconsolidation of SQL Server Data Tools (SSDT). As Microsoft worked to supplant the popular and useful Business Intelligence Development Studio (BIDS) with SQL Server Data Tools, they wound up confusing almost everyone by creating not one but two versions of SQL Server Data Tools, both of which needed to be downloaded separately from SQL Server itself. With the SQL Server 2016 release, Microsoft has indicated that it intends to reconsolidate SQL Server Data Tools.


  • At this year's inaugural Ignite Conference, held in Chicago, Microsoft announced that the next release of SQL Server, previously referred to as SQL Server vNext, will officially be SQL Server 2016. There's no doubt that SQL Server has been on a fast-track release program, and the upcoming SQL Server 2016 release will arrive just two short years after SQL Server 2014. For business-critical enterprise software this is a torrid release cycle that many businesses will have trouble keeping up with. But Microsoft fully intends to make the SQL Server 2016 release worth getting. You can find out more about the upcoming SQL Server 2016 features at the SQL Server 2016 Preview page and the SQL Server Blog. You might also check out the Ignite session SQL Server Evolution on Channel 9.

    Here are eight great features to look for in SQL Server 2016.

  • HAVING (clause) fun with SAS Enterprise Guide – The SAS Dummy #example #of #having #clause #in #sql




    HAVING (clause) fun with SAS Enterprise Guide

    Last week I attended a meeting of the Toronto Area SAS Society. (Okay, I didn't just attend; I was a presenter as well.) This user group meeting contained a feature that I had never seen before: "Solutions to the Posed Problem".

    Weeks before the meeting, an “open problem” was posted to the TASS community group site. The problem included 134 records of time series data and an objective: create a data set that contains just the most recent record (according to the date and time values) for each unique identifier in the set.

    Here’s a snapshot of a data sample, with the desired output records highlighted.

    It’s a simple little puzzle similar to those that SAS programmers must deal with every day. But the instructions for this particular problem were to “use an interface product” to solve it. The implication was “use SAS Enterprise Guide”, but the entrants got creative with that requirement. Some used SAS Enterprise Guide to import the data (which was provided as an Excel file), but then wrote a program to do the rest. Some simply opened a program node and wrote code to do the whole shebang.

    Art Tabachneck, one of the TASS leaders and a frequent SAS-L contributor, tried to use just “point-and-click” to solve the problem without resorting to a program node, but he didn’t manage it. He knew that the Query Builder was the best chance for an SQL solution, and that he would need a HAVING clause to make it work. He was on the right track! But in the limited time that he had to devote to the problem, he couldn’t bend the Query Builder to his will. In the end, Art wound up working within the program node, just like most other participants.

    When I returned home, I made it my mission to devise a “pure” task-based solution. And here it is:

    Step 1: Import the Excel file
    That's easy. As Art observed, you simply select File > Import Data, point to the XLS file on your local machine, and then click Finish when the wizard appears. The default behavior produces a perfect SAS data set.

    Step 2: Start with the query
    With the output from the import step, select the Query Builder task. We need all of the columns represented, so drag all columns over to the Select Data tab.

    Step 3: Create a computed column for a combined date-time value
    All of the successful solutions did this step somehow. I borrowed from the most elegant of these and created an “Advanced expression”-based column named “Test_DateTime” as:

    Step 4: Create a MAX aggregation version of that computed column
    Create another column based on Test_DateTime column, and this time apply the MAX summarization. Name the new column MAX_of_Test_DateTime.

    This is the trick! As soon as you have an aggregated measure as part of the query, the Filter tab will divide into two sections, revealing a “Filter the summarized data” section. That’s the piece that allows you to generate a “Having” structure.

    Step 5: Create a Having filter based on the summarized column
    The filter is effectively:
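(The filter itself appears as a screenshot in the original post; in effect, the summarized-data condition is, using the column names from steps 3 and 4:)

```sql
HAVING Test_DateTime = MAX_of_Test_DateTime
```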

    The Query Builder generates more verbose SQL than the above, but you get the idea. Here’s a screen shot that shows the Having filter in place.

    Step 6: Change the Grouping to include just SAMPLE_ID
    When you have a summarized measure in the query the Query Builder provides control over the grouping behavior. By default, the query is “grouped by” all non-summarized columns. But for this query, we want to group only by each value of the Sample_ID column. On the Select Data tab, uncheck “Automatically select groups”. Then click the Edit Groups button and exclude all except for t1.Sample_ID.

    Here’s the Select Data tab with the proper grouping in place:

    When you run the query, you should get the desired result. Here’s the SQL that was generated:
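(The generated SQL also appears as a screenshot. A hand-written query of the same shape, assuming the imported table is named WORK.IMPORT and carries Test_Date and Test_Time columns — names assumed for illustration — would be:)

```sql
SELECT t1.Sample_ID,
       t1.Test_Date,
       t1.Test_Time,
       /* step 3: the combined date-time expression */
       t1.Test_DateTime
FROM WORK.IMPORT t1
GROUP BY t1.Sample_ID
HAVING t1.Test_DateTime = MAX(t1.Test_DateTime)
```

PROC SQL remerges the MAX summary back onto each row of its group, so the HAVING clause keeps exactly the most recent record for each Sample_ID.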

    SQL: Creating the Database #sql, #dbms, #modello #relazionale, #relazione, #database, #foreign #key, #publication, #author, #person, #publisher, #institution, #thesis, #jdbc, #odbc, #esql, #code, #default, #postgresql, #gratis, #gratuito, #free, #corso, #guida, #elemento, #creazione


    Creating the database

    In a relational system, a database is made up of a set of tables, which correspond to the relations of the relational model. The terminology used in SQL does not speak of relations, just as it does not use the term attribute, but rather the word column, nor of tuples, but of rows. In what follows, both terminologies will be used interchangeably, so table will stand for relation, column for attribute and row for tuple, and vice versa.
    In practice, creating the database consists of creating the tables that make it up. Actually, before the tables can be created, the database itself normally has to be created, which often means defining a separate namespace for each set of tables. This way a DBMS can manage several independent databases at the same time without conflicts between the names used in each of them. The mechanism provided by the standard for creating separate namespaces is the SQL statement "CREATE SCHEMA". That mechanism is often not used (or at least not with the purpose and meaning intended by the standard); instead, each DBMS provides a proprietary procedure for creating a database, normally extending SQL with a statement not found in the standard: "CREATE DATABASE".
    The syntax used by PostgreSQL, and by the most widespread DBMSs as well, is the following:

    CREATE DATABASE database_name

    PostgreSQL also provides a command that can be invoked from a Unix shell (or the shell of the system in use) and performs the same operation:

    createdb database_name

    To create our bibliographic database, we will therefore use this command:

    Once the database has been created, the tables it is made up of can be created. The SQL statement provided for this purpose is:

    CREATE TABLE table_name (
    column_name column_type [ default_clause ] [ column_constraints ]
    [ , column_name column_type [ default_clause ] [ column_constraints ] ... ]
    [ , [ table_constraint ] ... ] )

    column_name: the name of a column of the table. It is best not to overdo the length of column identifiers, since SQL Entry Level allows names of no more than 18 characters. In any case, consult the documentation of your specific database. Names must begin with an alphabetic character.

    column_type: an indication of the type of data the column can contain. The main types provided by the SQL standard are:

    • CHARACTER(n)
      A fixed-length string of exactly n characters. CHARACTER can be abbreviated as CHAR.

    • CHARACTER VARYING(n)
      A variable-length string of at most n characters. CHARACTER VARYING can be abbreviated as VARCHAR or CHAR VARYING.

    • INTEGER
      A signed integer number. It can be abbreviated as INT. The precision, that is, the size of the integer that can be stored in a column of this type, depends on the implementation of the DBMS at hand.

    • SMALLINT
      A signed integer with a precision no greater than that of INTEGER.

  • FLOAT(p)
    A floating-point number with precision p. The maximum value of p depends on the DBMS implementation. FLOAT can also be used without indicating a precision, in which case the default precision is used, again implementation-dependent. REAL and DOUBLE PRECISION are synonyms for FLOATs with specific precisions; here too the precisions are implementation-dependent, provided that the precision of the former is no greater than that of the latter.

  • DECIMAL(p,q)
    A signed fixed-point number of at least p digits, with q digits after the decimal point. DEC is an abbreviation of DECIMAL. DECIMAL(p) is an abbreviation of DECIMAL(p,0). The maximum value of p is implementation-dependent.

  • INTERVAL
    A period of time (years, months, days, hours, minutes, seconds and fractions of a second).

  • DATE, TIME and TIMESTAMP
    A precise moment in time. DATE indicates a year, month and day. With TIME you can specify an hour, minutes and seconds. TIMESTAMP is the combination of the two. Seconds are a number with a decimal part, which also allows fractions of a second to be specified.

  • default_clause: indicates the default value the column takes if none is explicitly assigned when a row is created. The syntax to use is the following:

    DEFAULT value

    where value is a valid value for the type with which the column has been defined.

    column_constraints: integrity constraints that apply to the single attribute. They are:

    • NOT NULL, which indicates that the column cannot take the value NULL.
    • PRIMARY KEY, which indicates that the column is the primary key of the table.
    • a reference definition, indicating that the column is a foreign key towards the table and fields indicated in the definition. The syntax is the following:

      REFERENCES table_name [ ( column1 [ , column2 ... ] ) ]
      [ ON DELETE action ] [ ON UPDATE action ]

    The ON DELETE and ON UPDATE clauses indicate what action must be taken if a tuple in the referenced table is deleted or updated. In such cases, in fact, the referencing column (the one being defined) could end up holding inconsistent values. The possible actions are:

    • CASCADE: delete the tuple containing the referencing column (for ON DELETE) or update the referencing column accordingly (for ON UPDATE).
    • SET DEFAULT: assign the referencing column its default value.
    • SET NULL: assign the referencing column the value NULL.
  • a value check, which allows or disallows assigning a value to the column depending on the result of an expression. The syntax used is:

    CHECK ( conditional_expression )

    where conditional_expression is an expression that evaluates to true or false.
    For example, if we are defining the column COLUMNA1 with the following check:

    CHECK ( COLUMNA1 < 1000 )

    only values lower than 1000 can be stored in that column.

  • table_constraint: integrity constraints that can refer to several columns of the table. They are:

    • the definition of the primary key:

      PRIMARY KEY ( column1 [ , column2 ... ] )

      Note that in this case, unlike the definition of the primary key as a column constraint, the key can be made up of more than one attribute.

  • the definitions of the foreign keys:

    FOREIGN KEY ( column1 [ , column2 ... ] ) reference_definition

    The reference_definition has the same syntax and meaning as the one that can appear as a column constraint.

  • a value check, with the same syntax and meaning as the one that can be used as a column constraint.

    To better illustrate the use of the CREATE TABLE statement, let us look at some commands that implement the example bibliographic database.

    CREATE TABLE Publication (
    ID INTEGER PRIMARY KEY,
    type CHAR(18) NOT NULL
    )

    The statement above creates the table Publication, made up of the two columns ID, of type INTEGER, and type, of type CHAR(18). ID is the primary key of the relation. The attribute type carries a NOT NULL constraint.

    CREATE TABLE Book (
    ID INTEGER PRIMARY KEY REFERENCES Publication(ID),
    title VARCHAR(160) NOT NULL,
    publisher INTEGER NOT NULL REFERENCES Publisher(ID),
    volume VARCHAR(16),
    series VARCHAR(160),
    edition VARCHAR(16),
    pub_month CHAR(3),
    pub_year INTEGER NOT NULL,
    note VARCHAR(255)
    )

    Creates the relation Book, made up of nine attributes. The primary key is the attribute ID, which is also a foreign key towards the relation Publication. The attributes title, publisher and pub_year carry NOT NULL constraints. In addition, the attribute publisher is a foreign key towards the table Publisher.

    CREATE TABLE Author (
    publicationID INTEGER REFERENCES Publication(ID),
    personID INTEGER REFERENCES Person(ID),
    PRIMARY KEY (publicationID, personID)
    )

    Creates the relation Author, composed of two attributes: publicationID and personID. The primary key here is the combination of the two attributes, as indicated by the PRIMARY KEY table constraint. publicationID is a foreign key towards the relation Publication, while personID is one towards the relation Person.
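To see the schema in action, a query like the following lists who wrote what (a sketch; it assumes the Person table, which the text references but does not show, has columns ID and name, and it uses the older comma-join style that PostgreSQL 6.5 supports):

```sql
-- Pair each person with the publications they authored
SELECT Person.name, Publication.type
FROM Author, Person, Publication
WHERE Author.personID = Person.ID
  AND Author.publicationID = Publication.ID;
```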

    The file create_biblio.sql contains all the commands needed to create the structure of the example bibliographic database.

    In PostgreSQL, at least up to version 6.5.1, foreign key constraints are not yet implemented. The parser nevertheless accepts the corresponding SQL syntax, so the FOREIGN KEY and REFERENCES constructs do not produce an error, only a warning.

    Java code for connecting to MS SQL Server using SQL Server Authentication #connect #ms #sql #server,java #code


    First of all, you will need to add a jar file to your project library for the SQL Server 2000 Driver for JDBC service. My target is SQL Server 2000, which requires the jar file called "sqljdbc4.jar". This is no longer available on the Microsoft website, but you can download it here. For other versions of SQL Server, here is the link to the SQL Server 2000 Driver for JDBC service.

    The following is the code for connecting to MS SQL Server and selecting some records from a testing table.


    If you want someone to read your code, please put the code inside <pre><code> and </code></pre> tags. For example:


    try {
        Class.forName("com.mysql.jdbc.Driver");
        connection = DriverManager.getConnection(
            "jdbc:mysql://HOST:port_number/DB_name", "DB_username", "DB_pass");
        st = connection.createStatement();
        rs = st.executeQuery("SELECT * FROM table_name WHERE id = 1");
    } catch (Exception e) {
        System.out.println("DB error: " + e);
    }

    Can you give an example of how to connect to SQL Server from Java?