Our client is a thriving international media network. The successful candidate will:
- Ensure integrity of media databases
- Ensure databases scale and remain responsive to their users
- Scope and define requirements for new data feeds
- Create data infrastructure to support big data projects
- Create and maintain databases to capture internal media delivery
- Create and maintain connectors to ensure the flow of data from partners to our databases
- Create and maintain feeds from these databases to BI providers
Who are we looking for?
We are looking for someone experienced in moving structured data from A to B: loading data into defined structures via APIs and scheduled reports (most likely FTP, and possibly emailed reports).
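To illustrate the kind of work involved, here is a minimal sketch of mapping a raw scheduled report onto a defined structure. The field names (`campaign`, `impressions`, `spend`) are illustrative assumptions, not the client's actual schema:

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class DeliveryRecord:
    # Hypothetical target structure; field names are illustrative only.
    campaign: str
    impressions: int
    spend: float

def parse_report(raw_csv: str) -> list:
    """Map a raw scheduled report (e.g. one pulled over FTP or extracted
    from an emailed attachment) onto a defined structure, coercing the
    string fields to proper types along the way."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [
        DeliveryRecord(
            campaign=row["campaign"],
            impressions=int(row["impressions"]),
            spend=float(row["spend"]),
        )
        for row in reader
    ]
```

In practice the parsed records would then be bulk-inserted into the delivery database rather than held in memory.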
This person will create, curate and maintain a media delivery and performance database. The database does not yet exist and can be built on the Microsoft stack or a SQL alternative.
We envisage this database living in the cloud (Azure or AWS, not Google Compute), connected via a stable VPN connection.
There will be multiple databases, one per client in our agency, housing data with very similar structures. We therefore need a modular approach to obtaining the data. Creating and maintaining these databases will require:
- Classic ETL
- User admin
- OLAP (an advantage, but not necessary)
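The modular approach above amounts to one reusable load routine driven by per-client configuration. A minimal sketch, assuming a shared `delivery` table structure across clients (SQLite stands in here for whichever SQL engine is chosen; all names are illustrative):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class ClientConfig:
    # Hypothetical per-client settings; in production these would also
    # carry FTP/API credentials rather than in-memory rows.
    name: str
    db_path: str
    source_rows: list  # stand-in for an extracted feed

def load_delivery_rows(cfg: ClientConfig) -> int:
    """Load one client's media-delivery rows into that client's database.

    Because every client database shares the same structure, the same
    routine serves all clients; only the config differs. Returns the
    total row count so callers can sanity-check the load.
    """
    conn = sqlite3.connect(cfg.db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS delivery "
        "(campaign TEXT, impressions INTEGER, spend REAL)"
    )
    conn.executemany(
        "INSERT INTO delivery VALUES (?, ?, ?)", cfg.source_rows
    )
    conn.commit()
    total = conn.execute("SELECT COUNT(*) FROM delivery").fetchone()[0]
    conn.close()
    return total
```

Onboarding a new client then means adding a config entry, not writing new pipeline code.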
The person will need to draft requirements for incoming data feeds and set up systems that automatically alert team members when an import process fails. They will also need to communicate with non-technical team members about issues and potential changes.
We will also be working with some large but well-structured datasets (1bn rows of data per day), which we will explore using NoSQL solutions. Experience in this area is beneficial but absolutely not essential.
In summary, this job is for an independent-minded DBA who is interested in exploring what an organisation can do once its data is structured, and who also wants to play in the Big Data space.