The author of the blog article you refer to is mainly concerned with the potential complexity that can arise from integrated tests (although it is written in a very opinionated and categorical way). However, integrated tests are not necessarily bad, and some are actually more useful than pure unit tests. It really depends on the context of your application and what you're trying to test.
Many applications today would simply not work at all if their database server went down. At the very least, think about this in the context of the feature you're trying to test.
On the one hand, if what you're trying to test doesn't depend on the database, or can be made not to depend on it, then write your test so that it doesn't even try to use the database (just provide mock data as required). For example, if you're testing some authentication logic when serving a web page, it's probably a good idea to detach that from the DB altogether (assuming you don't rely on the DB for authentication, or that you can mock it reasonably easily).
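As a rough illustration (in Python, with a hypothetical `check_credentials` function standing in for your authentication logic), the test only needs something that behaves like the user lookup, not a real database:

```python
import hashlib
import unittest
from unittest import mock

# Hypothetical production function: the authentication logic only needs a
# way to look up a stored password hash, not a database connection.
def check_credentials(username, password, find_password_hash):
    stored = find_password_hash(username)
    if stored is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == stored

class AuthTest(unittest.TestCase):
    def test_valid_password_is_accepted(self):
        # The "repository" is just a mock; no database involved.
        lookup = mock.Mock(return_value=hashlib.sha256(b"s3cret").hexdigest())
        self.assertTrue(check_credentials("alice", "s3cret", lookup))
        lookup.assert_called_once_with("alice")

    def test_unknown_user_is_rejected(self):
        lookup = mock.Mock(return_value=None)
        self.assertFalse(check_credentials("bob", "whatever", lookup))

if __name__ == "__main__":
    unittest.main()
```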
On the other hand, if it's a feature that directly relies on your database, one that wouldn't work at all in a real environment should the database be unavailable, then mocking what the DB does in your DB client code (i.e. the layer that uses the DB) doesn't necessarily make sense.
For example, if you know that your application is going to rely on a database (and possibly on a specific database system), mocking the database behaviour for its own sake will often be a waste of time. Database engines (especially RDBMS) are complex systems. A few lines of SQL can perform a great deal of work, which would be difficult to simulate (in fact, if your SQL query is a few lines long, chances are you'll need many more lines of Java/PHP/C#/Python code to produce the same result internally). Duplicating the logic you've already implemented in the DB doesn't make sense, and checking that test code would then become a problem in itself.
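To make that concrete, here is a small sketch (the schema and query are invented for illustration): a handful of SQL lines that join, aggregate and filter, run against a real in-memory SQLite engine. Faking the result of that query in a mock would mean re-implementing the join and aggregation in application code, and that re-implementation would itself need tests.

```python
import sqlite3

# A few lines of SQL doing non-trivial work: join, aggregate, filter on the
# aggregate. Simulating this in application code means loops, dictionaries
# and sorting, i.e. duplicating what the database already does.
QUERY = """
SELECT c.name, COUNT(o.id) AS order_count, SUM(o.amount) AS total
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= :since
GROUP BY c.name
HAVING SUM(o.amount) > :minimum
ORDER BY total DESC
"""

def top_customers(conn, since, minimum):
    return conn.execute(QUERY, {"since": since, "minimum": minimum}).fetchall()

# Exercising it against a real (in-memory) engine is short and faithful.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         amount REAL, created_at TEXT);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES
        (1, 1, 120.0, '2024-01-10'),
        (2, 1,  80.0, '2024-02-01'),
        (3, 2,  15.0, '2024-02-05');
""")
print(top_customers(conn, since="2024-01-01", minimum=100.0))
# [('alice', 2, 200.0)]
```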
I wouldn't necessarily treat this as a problem of unit tests vs. integrated tests, but rather look at the scope of what is being tested. The overall problems of unit and integration testing remain: you need a reasonably realistic set of test data and test cases, but also something small enough for the tests to execute quickly.
The time to reset the database and repopulate it with test data is an aspect to consider; you would generally weigh this against the time it takes to write that mock code (which you would have to maintain too, eventually).
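For instance, a per-test fixture along these lines (again using an in-memory SQLite database as a stand-in for a dedicated test database) keeps the reset cost visible and easy to measure:

```python
import sqlite3
import unittest

class OrderRepositoryTest(unittest.TestCase):
    """Each test starts from a freshly rebuilt schema and data set, so the
    reset cost is paid per test; keeping the fixture small keeps it fast."""

    def setUp(self):
        # In a real project this would point at a dedicated test database;
        # an in-memory SQLite connection stands in here.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript("""
            CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
            INSERT INTO orders (amount) VALUES (10.0), (25.5);
        """)

    def tearDown(self):
        self.conn.close()

    def test_total_amount(self):
        (total,) = self.conn.execute("SELECT SUM(amount) FROM orders").fetchone()
        self.assertAlmostEqual(total, 35.5)

if __name__ == "__main__":
    unittest.main()
```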
Another point to consider is the degree to which your application depends on the database.
- If your application simply follows a CRUD model, where a layer of abstraction lets you swap between RDBMS simply by changing a configuration setting, chances are you'll be able to work with a mock system quite easily (possibly blurring the line between unit and integrated testing by using an in-memory RDBMS, as sketched after this list).
- If your application uses more complex logic, something specific to one of SQL Server, MySQL or PostgreSQL (for example), then it generally makes more sense to have a test that uses that specific system.
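For the first case, a minimal sketch (the `NoteRepository` class and its schema are invented for illustration) shows how pointing the same CRUD code at an in-memory database keeps the tests fast without mocking the SQL itself:

```python
import sqlite3

class NoteRepository:
    """Minimal CRUD layer; only the connection string differs between the
    production configuration and the test configuration."""

    def __init__(self, database):  # e.g. "app.db" in production
        self.conn = sqlite3.connect(database)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def add(self, body):
        cur = self.conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        return cur.lastrowid

    def get(self, note_id):
        row = self.conn.execute(
            "SELECT body FROM notes WHERE id = ?", (note_id,)
        ).fetchone()
        return row[0] if row else None

# In tests, the configuration simply points at an in-memory database:
repo = NoteRepository(":memory:")
note_id = repo.add("hello")
assert repo.get(note_id) == "hello"
```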