
SQL filter more than 1 table in Cartodb.js

I have a button on my map that, when a user clicks it, filters one CartoDB layer with SQL. I want each button click to filter two layers that share the same field and value. Here is the working map.

If you have multiple layers and you're pulling the data using a viz.json file, then the structure of the layers callback object is: layers[0] is the basemap and layers[1] holds the data layers. There are one to many sublayers in layers[1]. If you are using createLayer, then the layer object consists of the multiple sublayers.

If you're pulling the data from a viz.json file, your situation is probably like this:

cartodb.createVis('map', 'vizjson_link')
  .done(function(vis, layers) {
    // layers[1] is the data layer; grab the two sublayers you want to filter
    var sublayer0 = layers[1].getSubLayer(0);
    var sublayer1 = layers[1].getSubLayer(1);

    $("#change-sublayer0").on('click', function(e) {
      sublayer0.setSQL(sql_string0);
    });
    $("#change-sublayer1").on('click', function(e) {
      sublayer1.setSQL(sql_string1);
    });
  });

We also tried to address this in a Map Academy lesson on CartoDB.js: http://academy.cartodb.com/courses/03-cartodbjs-ground-up/lesson-3.html

Edit: There are three sublayers in layers[1]. Therefore, when you call createSelector(layers[1]), you can extract these sublayers as above within createSelector:

var sublayerSup = layer.getSubLayer(0);
var sublayerPledge = layer.getSubLayer(1);

You can then change the query applied to each of these in the if statement you set up:

var querySup = "SELECT * FROM sup_dist_2011";
var queryPledge = "SELECT * FROM pledge";

if (area !== 'all') {
  querySup = querySup + " WHERE sup_dist_n = " + area;
  queryPledge = queryPledge + " WHERE pledge = " + otherAttribute;
}

// change the query in the layer to update the map
sublayerSup.setSQL(querySup);
sublayerPledge.setSQL(queryPledge);

You should probably also add another data attribute to your list elements so that you can pull additional information from them:

  • All Districts
  • District 1
  • District 2
  • District 3
  • District 4
  • District 5
Again, within createSelector you can extract this data like this:

    var pledge = $li.data('pledge');

By the way, you don't need the var sql = new cartodb.SQL(… ) line because you're only interacting with sublayers, not the SQL component of CartoDB.js.

    A little ramble-y, but hope it helps!


    KB4022619 - SQL Server 2014 Service Pack 3 release information

    This article contains important information to read before you install Microsoft SQL Server 2014 Service Pack 3 (SP3). It describes how to get the service pack, the list of fixes that are included in the service pack, known issues, and a list of copyright attributions for the product.

Note: This article serves as a single source of information to locate all documentation that's related to this service pack. It includes all the information that you previously found in the release notes and Readme.txt files.


    Firstly, let's consider the typical problem:

    You have set up an Agent job to run at a scheduled time - let's say overnight. Of course, everything runs fine, until the day that it fails, inexplicably. At this point, like me, you probably right-click the job in question, and select "View History", to open the Log File viewer. Then you expand the details of the failed step (at the top of the list), and read the details in the lower part of the dialog.

    Then the trouble starts because the details are massively truncated, to say the least. In fact they only show the first 1024 characters, and given the extreme verbosity of SSIS (as this is what I was using), this meant that effectively all I got was a preamble to the problem, but nothing of much use.

Yes, I know that efficient logging in SSIS can give me lots of debugging information, and of course I have added extensive logging to all my SSIS packages. Yet the time and effort spent digging through those logs for the error messages, given the inevitable urgency of a data load failure, prompted me to wish for something simpler, faster and more centralised. After all, when debugging a process invoked by SQL Server Agent, I don't want my first port of call to be a deep dive into logs specific to the processes that were invoked.
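One simple, centralised place to start looking is the job history that Agent itself stores in msdb. The sketch below is an illustration only (it is not necessarily the approach this article goes on to build); it pulls the most recent failed step rows with whatever message text Agent recorded:

    -- Illustrative sketch: read recent failed job step history directly from msdb,
    -- bypassing the Log File Viewer. Very verbose step output may still be
    -- truncated in the message column.
    SELECT TOP (50)
           j.name      AS job_name,
           h.step_id,
           h.step_name,
           h.run_date,  -- stored as an integer, e.g. 20240131
           h.run_time,  -- stored as an integer, e.g. 233000 for 23:30:00
           h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
    WHERE h.run_status = 0      -- 0 = Failed
    ORDER BY h.instance_id DESC;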


41 Essential SQL Interview Questions

What does UNION do? What is the difference between UNION and UNION ALL?

    UNION merges the contents of two structurally-compatible tables into a single combined table. The difference between UNION and UNION ALL is that UNION will omit duplicate records whereas UNION ALL will include duplicate records.

It is important to note that the performance of UNION ALL will typically be better than UNION, since UNION requires the server to do the additional work of removing any duplicates. So, in cases where it is certain that there will not be any duplicates, or where having duplicates is not a problem, use of UNION ALL would be recommended for performance reasons.
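As an illustration (the tables and data below are hypothetical):

    -- Two structurally-compatible tables
    CREATE TABLE online_sales (customer_name varchar(50));
    CREATE TABLE store_sales  (customer_name varchar(50));

    INSERT INTO online_sales VALUES ('Alice'), ('Bob');
    INSERT INTO store_sales  VALUES ('Bob'), ('Carol');

    -- UNION removes the duplicate 'Bob': Alice, Bob, Carol (3 rows)
    SELECT customer_name FROM online_sales
    UNION
    SELECT customer_name FROM store_sales;

    -- UNION ALL keeps it: Alice, Bob, Bob, Carol (4 rows), skipping the de-duplication work
    SELECT customer_name FROM online_sales
    UNION ALL
    SELECT customer_name FROM store_sales;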

    List and explain the different types of JOIN clauses supported in ANSI-standard SQL.

    ANSI-standard SQL specifies five types of JOIN clauses as follows:

    INNER JOIN (a.k.a. “simple join”): Returns all rows for which there is at least one match in BOTH tables. This is the default type of join if no specific JOIN type is specified.

    LEFT JOIN (or LEFT OUTER JOIN ): Returns all rows from the left table, and the matched rows from the right table i.e., the results will contain all records from the left table, even if the JOIN condition doesn’t find any matching records in the right table. This means that if the ON clause doesn’t match any records in the right table, the JOIN will still return a row in the result for that record in the left table, but with NULL in each column from the right table.

    RIGHT JOIN (or RIGHT OUTER JOIN ): Returns all rows from the right table, and the matched rows from the left table. This is the exact opposite of a LEFT JOIN i.e., the results will contain all records from the right table, even if the JOIN condition doesn’t find any matching records in the left table. This means that if the ON clause doesn’t match any records in the left table, the JOIN will still return a row in the result for that record in the right table, but with NULL in each column from the left table.

    FULL JOIN (or FULL OUTER JOIN ): Returns all rows for which there is a match in EITHER of the tables. Conceptually, a FULL JOIN combines the effect of applying both a LEFT JOIN and a RIGHT JOIN i.e., its result set is equivalent to performing a UNION of the results of left and right outer queries.

CROSS JOIN: Returns all records where each row from the first table is combined with each row from the second table (i.e., returns the Cartesian product of the sets of rows from the joined tables). Note that a CROSS JOIN can be specified either (a) using the CROSS JOIN syntax ("explicit join notation") or (b) by listing the tables in the FROM clause separated by commas without using a WHERE clause to supply join criteria ("implicit join notation").
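A short illustrative sketch of these clauses, using two hypothetical tables (RIGHT JOIN simply mirrors LEFT JOIN with the table roles swapped):

    -- Hypothetical tables for illustration
    CREATE TABLE departments (id int PRIMARY KEY, dept_name varchar(50));
    CREATE TABLE employees   (id int PRIMARY KEY, name varchar(50), dept_id int);

    -- INNER JOIN: only employees whose dept_id matches a department
    SELECT e.name, d.dept_name
    FROM employees e
    INNER JOIN departments d ON d.id = e.dept_id;

    -- LEFT JOIN: every employee; dept_name is NULL when there is no match
    SELECT e.name, d.dept_name
    FROM employees e
    LEFT JOIN departments d ON d.id = e.dept_id;

    -- FULL JOIN: every employee and every department, matched where possible
    SELECT e.name, d.dept_name
    FROM employees e
    FULL JOIN departments d ON d.id = e.dept_id;

    -- CROSS JOIN: every employee paired with every department (Cartesian product)
    SELECT e.name, d.dept_name
    FROM employees e
    CROSS JOIN departments d;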

    Given the following tables:

    What will be the result of the query below?

    Explain your answer and also provide an alternative version of this query that will avoid the issue that it exposes.


    Before You Begin

    Limitations and Restrictions

The actual number of query requests can exceed the value set in max worker threads, in which case SQL Server pools the worker threads so that the next available worker thread can handle the request. A worker thread is assigned only to active requests and is released once the request is serviced. This happens even if the user session/connection on which the request was made remains open.

The max worker threads server configuration option does not limit all threads that may be spawned inside the engine. System threads required for tasks such as LazyWriter, Checkpoint, Log Writer, Service Broker, Lock Manager, or others are spawned outside this limit. Availability Groups use some of the worker threads from within the max worker thread limit but also use system threads (see Thread Usage by Availability Groups). If the number of threads configured is being exceeded, the following query will provide information about the system tasks that have spawned the additional threads.
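A sketch along those lines, joining the task and worker DMVs to list the non-user (system) tasks that are holding workers (this is an illustration, not necessarily the exact query referenced above):

    -- Illustrative sketch: system tasks and the worker threads they occupy
    SELECT s.session_id,
           r.command,
           r.status,
           t.task_state,
           w.state AS worker_state
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
    JOIN sys.dm_os_tasks      AS t ON t.session_id = r.session_id
                                  AND t.request_id = r.request_id
    JOIN sys.dm_os_workers    AS w ON w.worker_address = t.worker_address
    WHERE s.is_user_process = 0;   -- system sessions only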

    Recommendations

This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server professional. If you suspect that there is a performance problem, it is probably not the availability of worker threads. The cause is more likely related to activities that occupy the worker threads and do not release them. Examples include long-running queries or bottlenecks on the system (I/O, blocking, latch waits, network waits) that cause long-waiting queries. It is best to find the root cause of a performance issue before you change the max worker threads setting. For more information on assessing performance, see Monitor and tune for performance.

    Thread pooling helps optimize performance when a large number of clients connect to the server. Usually, a separate operating system thread is created for each query request. However, with hundreds of connections to the server, using one thread per query request can consume large amounts of system resources. The max worker threads option enables SQL Server to create a pool of worker threads to service a larger number of query requests, which improves performance.

    The following table shows the automatically configured number of max worker threads (when value is set to 0) based on various combinations of CPUs, computer architecture, and versions of SQL Server, using the formula: Default Max Workers + ((logical CPUs - 4) * Workers per CPU).
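For example, on a 64-bit computer running up to SQL Server 2016 (13.x) SP1 with 16 logical CPUs, this works out to 512 + ((16 - 4) * 16) = 704, matching the table below.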

Number of CPUs | 32-bit computer (up to SQL Server 2014 (12.x)) | 64-bit computer (up to SQL Server 2016 (13.x) SP1) | 64-bit computer (starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x))
<= 4           | 256  | 512  | 512
8              | 288  | 576  | 576
16             | 352  | 704  | 704
32             | 480  | 960  | 960
64             | 736  | 1472 | 1472
128            | 1248 | 2496 | 4480
256            | 2272 | 4544 | 8576

    Up to SQL Server 2016 (13.x) SP1, the Workers per CPU only depend on the architecture (32-bit or 64-bit):

Number of CPUs | 32-bit computer [1]            | 64-bit computer
<= 4           | 256                            | 512
> 4            | 256 + ((logical CPUs - 4) * 8) | 512 [2] + ((logical CPUs - 4) * 16)

    Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x), the Workers per CPU depend on the architecture and number of processors (between 4 and 64, or greater than 64):

Number of CPUs | 32-bit computer [1]             | 64-bit computer
<= 4           | 256                             | 512
> 4 and <= 64  | 256 + ((logical CPUs - 4) * 8)  | 512 [2] + ((logical CPUs - 4) * 16)
> 64           | 256 + ((logical CPUs - 4) * 32) | 512 [2] + ((logical CPUs - 4) * 32)

[1] Starting with SQL Server 2016 (13.x), SQL Server can no longer be installed on a 32-bit operating system. 32-bit computer values are listed for the assistance of customers running SQL Server 2014 (12.x) and earlier. We recommend 1,024 as the maximum number of worker threads for an instance of SQL Server that is running on a 32-bit computer.

[2] Starting with SQL Server 2017 (14.x), the Default Max Workers value is divided by 2 for machines with less than 2 GB of memory.

    When all worker threads are active with long running queries, SQL Server might appear unresponsive until a worker thread completes and becomes available. Although this is not a defect, it can sometimes be undesirable. If a process appears to be unresponsive and no new queries can be processed, then connect to SQL Server using the dedicated administrator connection (DAC), and kill the process. To prevent this, increase the number of max worker threads.
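If you do decide to change the setting, a minimal sketch with sp_configure (the value 1024 is only an example; 0 restores automatic configuration, and ALTER SETTINGS permission is required, as described under Permissions below):

    -- View and change max worker threads
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'max worker threads';        -- show current configured and running values

    EXEC sp_configure 'max worker threads', 1024;  -- example value; 0 = auto-configure
    RECONFIGURE;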

    Security

    Permissions

    Execute permissions on sp_configure with no parameters or with only the first parameter are granted to all users by default. To execute sp_configure with both parameters to change a configuration option or to run the RECONFIGURE statement, a user must be granted the ALTER SETTINGS server-level permission. The ALTER SETTINGS permission is implicitly held by the sysadmin and serveradmin fixed server roles.


    Recommendations and guidelines for improving FILESTREAM performance

The SQL Server FILESTREAM feature allows you to store varbinary(max) binary large object data as files in the file system. When you have a large number of rows in FILESTREAM containers, which are the underlying storage for both FILESTREAM columns and FileTables, you can end up with a file system volume that contains a large number of files. To achieve the best performance when processing the integrated data from the database as well as the file system, it is important to ensure the file system is tuned optimally. The following are some of the tuning options that are available from a file system perspective:

Altitude check for the SQL Server FILESTREAM filter driver (e.g. rsfx0100.sys). Evaluate all the filter drivers loaded for the storage stack associated with a volume where the FILESTREAM feature stores files, and make sure that the rsfx driver is located at the bottom of the stack. You can use the FLTMC.EXE control program to enumerate the filter drivers for a specific volume: C:\Windows\System32>fltMC.exe filters

    Check that the server has the "last access time" property disabled for the files. This file system attribute is maintained in the registry:
Key Name: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
    Name: NtfsDisableLastAccessUpdate
    Type: REG_DWORD
    Value: 1

    Check that the server has 8.3 naming disabled. This file system attribute is maintained in the registry:
Key Name: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
    Name: NtfsDisable8dot3NameCreation
    Type: REG_DWORD
    Value: 1

    Check that the FILESTREAM directory containers do not have file system encryption or file system compression enabled, as these can introduce a level of overhead when accessing these files.

    From an elevated command prompt, run fltmc instances and make sure that no filter drivers are attached to the volume where you try to restore.

    Check that FILESTREAM directory containers do not have more than 300,000 files. You can use the information from sys.database_files catalog view to find out which directories in the file system store FILESTREAM-related files. This can be prevented by having multiple containers. (See the next bullet item for more information.)
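As a sketch, the containers for the current database can be listed like this:

    -- List the FILESTREAM containers (directories) of the current database
    SELECT name, physical_name, type_desc, state_desc
    FROM sys.database_files
    WHERE type_desc = 'FILESTREAM';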

With only one FILESTREAM filegroup, all data files are created under the same folder. Creating very large numbers of files can be slowed by large NTFS indexes, which can also become fragmented.

    Having multiple filegroups generally should help with this (the application uses partitioning or has multiple tables, each going to its own filegroup).

    With SQL Server 2012 and later versions, you can have multiple containers or files under a FILESTREAM filegroup, and a round-robin allocation scheme will apply. Therefore the number of NTFS files per directory will get smaller.

    Backup and restore can become faster with multiple FILESTREAM containers, if multiple volumes storing containers are used.

SQL Server 2012 supports multiple containers per filegroup, which can make things much easier: no complicated partitioning scheme is needed just to manage a larger number of files.
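A hypothetical sketch of adding a second container to an existing FILESTREAM filegroup (the database, filegroup, logical name, and path are placeholders):

    -- Add a second FILESTREAM container so new files are spread across two directories/volumes
    ALTER DATABASE MyDatabase
    ADD FILE
    (
        NAME = fs_container2,
        FILENAME = 'E:\FilestreamData\Container2'   -- directory path; parent folder must exist
    )
    TO FILEGROUP MyFilestreamFilegroup;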

    The NTFS MFT may become fragmented, and that can cause performance issues. The MFT reserved size does depend on volume size, so you may or may not encounter this.

    You can check the MFT fragmentation with defrag /A /V C: (change C: to the actual volume name).

    You can reserve more MFT space by using fsutil behavior set mftzone 2.

    FILESTREAM data files should be excluded from antivirus software scanning.

    Windows Server 2016 automatically enables Windows Defender. Make sure that Windows Defender is configured to exclude Filestream files. Failure to do this can result in decreased performance for backup and restore operations.


    A few comments on the .csv import method

  • I typed \copy and not just COPY because my SQL user doesn’t have SUPERUSER privileges, so technically I could not use the COPY command (this is an SQL thing). Typing \copy instead is the simplest workaround, but the best solution would be to give yourself SUPERUSER privileges and then use the original COPY command. (In this video starting at 2:55 I show how to give SUPERUSER privileges to your SQL user.)
  • Why didn’t we do the COPY in our SQL manager tool? Same reason: if you don’t have SUPERUSER privileges, you can’t run the COPY command from an SQL tool, only from the command line. If you follow the video that I linked in the previous point, you will be able to run the same COPY statement from pgadmin or SQL Workbench.
  • The '/home/dataguy/test_results.csv' part is the location of the file and the name of the file, together. Again, we found out the location by using the pwd command in the right folder. (A sketch of the full command follows this list.)
    • And finally: if you are not comfortable with these command line steps, read the first few articles from my Bash for Data Analytics article series.
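For reference, a sketch of the full command as it would be typed in psql (the DELIMITER, CSV and HEADER options are assumptions about the file format):

    -- psql meta-command: client-side copy that does not require SUPERUSER
    \copy test_results FROM '/home/dataguy/test_results.csv' DELIMITER ',' CSV HEADER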

    And boom, the data is copied from a csv file into our SQL table.
    Run this query from your SQL query tool:

    SELECT * FROM test_results


    How to use expressions in SQL SELECT Statement?

Expressions combine one or more arithmetic operators; they can be used in the SELECT, WHERE and ORDER BY clauses of the SQL SELECT statement.

Here we will explain how to use expressions in the SQL SELECT statement; using expressions in the WHERE and ORDER BY clauses is explained in their respective sections.

When more than one arithmetic operator is used in an expression, the operators are evaluated in a specific order of precedence: parentheses first, then multiplication and division, then addition and subtraction. Operators of equal precedence are evaluated from left to right.
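For example, a sketch using a hypothetical order_items table:

    -- Arithmetic expressions in the SELECT list (table and columns are placeholders)
    SELECT product_name,
           unit_price * quantity          AS line_total,
           (unit_price * quantity) * 0.10 AS tax   -- parentheses make the intent explicit
    FROM order_items;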


    Threat Modeling

    • SQL injection attacks allow attackers to spoof identity, tamper with existing data, cause repudiation issues such as voiding transactions or changing balances, allow the complete disclosure of all data on the system, destroy the data or make it otherwise unavailable, and become administrators of the database server.
    • SQL Injection is very common with PHP and ASP applications due to the prevalence of older functional interfaces. Due to the nature of programmatic interfaces available, J2EE and ASP.NET applications are less likely to have easily exploited SQL injections.
    • The severity of SQL Injection attacks is limited by the attacker’s skill and imagination, and to a lesser extent, defense in depth countermeasures, such as low privilege connections to the database server and so on. In general, consider SQL Injection a high impact severity.
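One widely used countermeasure is to pass user input as parameters rather than concatenating it into the SQL string. A minimal T-SQL sketch (the users table and the input value are hypothetical):

    -- Malicious-looking input is treated as a literal value, not executable SQL
    DECLARE @user_input nvarchar(50) = N'O''Brien; DROP TABLE users; --';

    EXEC sp_executesql
         N'SELECT id, name FROM users WHERE name = @name',
         N'@name nvarchar(50)',
         @name = @user_input;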

    SQL COUNT ALL example

    Let’s take a look at the customers table.

    To count all customers, you use the following query:

    The following query returns the number of countries except for the NULL values:
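Hypothetical sketches of the two queries described above (the country column name is an assumption):

    -- Count all rows in the customers table
    SELECT COUNT(*) FROM customers;

    -- Count only rows where country is not NULL
    -- (COUNT(column) skips NULLs, unlike COUNT(*))
    SELECT COUNT(country) FROM customers;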

