Many systems have a requirement to update multiple rows in an SQL database. For small numbers of rows, issuing a separate UPDATE statement for each row can be adequate, but this scales poorly: each UPDATE operation requires a round-trip communication with the database server. Where the application server and database server are on different hosts, each round trip incurs network latency as well.
For example, setting the salary of five staff members to 1200 would require five statements:

UPDATE staff SET salary = 1200 WHERE name = 'Bob';
UPDATE staff SET salary = 1200 WHERE name = 'Jane';
UPDATE staff SET salary = 1200 WHERE name = 'Frank';
UPDATE staff SET salary = 1200 WHERE name = 'Susan';
UPDATE staff SET salary = 1200 WHERE name = 'John';

What if we could upload the whole list of updates to the database server in one operation, and we could persuade the database server to apply those updates to the target table? This is in fact entirely possible in many database systems. Given a list of updates to apply, we could effect them using the following steps:

1. Create a temporary table with a column for each key and a column for each new value.
2. Load all of the updates into the temporary table with a single multi-row INSERT statement.
3. Update the target table by joining it to the temporary table on the key columns.
4. Drop the temporary table.

So in the example above we can reduce five statements to four. More importantly, the number of statements is no longer dependent on the number of rows requiring updates: even if we wanted to update a thousand rows with different values, we could still do it with four statements.

So we could think in terms of creating a re-usable module which would implement that logic. Such a module would take two inputs: "key_columns", which specifies the columns used (in the WHERE clause or join condition) to identify the rows which need to be updated, and the list of updates itself, in which the first element of each entry provides the value of the column specified by "key_columns" to identify the row to be updated.
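As a sketch of how such a module might look, here is a minimal implementation against SQLite using Python's sqlite3 module. The function name bulk_update, the value_columns parameter, and the temporary-table naming are illustrative assumptions rather than an established API, and production code would need to validate or quote the table and column names instead of interpolating them directly.

```python
import sqlite3


def bulk_update(conn, table, key_columns, value_columns, updates):
    """Apply many row updates in a fixed number of statements (four).

    key_columns identifies the rows to be updated; in each entry of
    `updates`, the leading elements hold the key values and the
    remaining elements hold the new values.
    """
    temp = f"temp_{table}_updates"  # hypothetical naming convention
    all_cols = list(key_columns) + list(value_columns)

    # Step 1: a temporary table with one column per key and per value.
    conn.execute(f"CREATE TEMP TABLE {temp} ({', '.join(all_cols)})")

    # Step 2: load every update in a single multi-row INSERT.
    row_ph = "(" + ", ".join("?" for _ in all_cols) + ")"
    conn.execute(
        f"INSERT INTO {temp} VALUES " + ", ".join([row_ph] * len(updates)),
        [value for row in updates for value in row],
    )

    # Step 3: update the target table by matching it to the temporary
    # table on the key columns.
    match = " AND ".join(f"{table}.{k} = {temp}.{k}" for k in key_columns)
    sets = ", ".join(
        f"{col} = (SELECT {temp}.{col} FROM {temp} WHERE {match})"
        for col in value_columns
    )
    conn.execute(
        f"UPDATE {table} SET {sets} "
        f"WHERE EXISTS (SELECT 1 FROM {temp} WHERE {match})"
    )

    # Step 4: drop the temporary table.
    conn.execute(f"DROP TABLE {temp}")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO staff VALUES (?, ?)",
    [("Bob", 1000), ("Jane", 1100), ("Frank", 900), ("Susan", 950)],
)
bulk_update(conn, "staff", ["name"], ["salary"],
            [("Bob", 1200), ("Jane", 1200), ("Frank", 1200)])
print(conn.execute("SELECT name, salary FROM staff ORDER BY name").fetchall())
# → [('Bob', 1200), ('Frank', 1200), ('Jane', 1200), ('Susan', 950)]
```

Step 3 uses a correlated subquery rather than an UPDATE ... FROM join so it stays portable across SQLite versions (UPDATE ... FROM only arrived in SQLite 3.33); on servers such as PostgreSQL, a direct join-based UPDATE would typically be preferred.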