We’re back with the next installment in Season 4 of Real World Fake Data and a collaboration with Back2Viz Basics, and this one’s tailor-made for ops analysts, workforce planners, and CX nerds. Say hello to Call Center Metrics, a data-rich, operationally grounded, scenario-flexible dataset that gives you all the tools to build dashboards that feel like the real deal.
What’s Inside?
This dataset simulates detailed call center operations, including individual call records, agent performance, and customer interactions. The structure mirrors what you’d find in real-world systems like NICE, Genesys, or Avaya, but with none of the access headaches.
Key Dimensions:
- Call ID, Customer ID, Agent ID
- Start Timestamp, End Timestamp
- Call Type – Inbound, Outbound, Follow-Up, etc.
- Resolution Status – Resolved, Escalated, Dropped
- Channel – Voice, Chat, Email
- Language, Region, Customer Segment
Performance Metrics:
- Call Duration
- Wait Time
- Hold Time
- Transfer Count
- First Call Resolution (FCR)
- CSAT Score
- Sentiment Score (modeled for NLP use cases)
You’ll also find agent-level metadata and scheduling fields for workforce-style visualizations.
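If you want to poke at the structure before building anything, a quick pandas load works fine. The column names below are assumptions based on the field list above — match them to the actual headers in the downloaded CSV before running.

```python
import pandas as pd

def load_calls(path_or_buffer) -> pd.DataFrame:
    """Read the call records and parse the timestamp columns.

    Assumes the timestamp columns are named start_timestamp and
    end_timestamp -- adjust to the real headers if they differ.
    """
    return pd.read_csv(
        path_or_buffer,
        parse_dates=["start_timestamp", "end_timestamp"],
    )
```

From there, `df.dtypes` and `df.describe(include="all")` give you a fast sanity check on each dimension and metric before you start designing views.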
Need Ideas? Here are some Use Cases
This dataset is ideal for:
- Call Volume Dashboards by hour, agent, or channel
- Resolution Funnel Analysis
- Sentiment Over Time Visualizations
- CSAT + FCR KPI Tracking
- Interactive QA Review Simulations
- Supervisor Scorecards
- Headcount & Shift Planning Experiments
Want a Challenge?
Feeling ambitious? Try these RWFD-style build challenges:
- A “red flag” escalation dashboard with conditional alerts
- Dynamic agent scorecards with filters for call type and region
- Call time heatmaps by day of week/hour of day
- A sentiment timeline with annotations for spikes
- FCR vs CSAT correlation visual using scatterplots and filters
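For the heatmap challenge, the grid behind the visual is just a day-of-week by hour-of-day pivot. A minimal sketch, assuming a parsed timestamp column named `start_timestamp` (an illustrative name, not necessarily the dataset's actual header):

```python
import pandas as pd

def call_heatmap(df: pd.DataFrame, ts_col: str = "start_timestamp") -> pd.DataFrame:
    """Count calls per (day of week, hour of day) -- the grid behind a heatmap."""
    out = df.copy()
    out["dow"] = out[ts_col].dt.day_name()
    out["hour"] = out[ts_col].dt.hour
    return out.pivot_table(
        index="dow", columns="hour",
        values=ts_col, aggfunc="count", fill_value=0,
    )
```

Feed the resulting table to your viz tool of choice, or reorder the index to Monday–Sunday before plotting so the rows read naturally.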
Ready to Dive In?
You can grab the Season 4 – Dataset #2 (Call Center Performance) HERE!
Don’t forget to tag your work: #RWFD #B2VB and #Tableau
Hi there,
I am working on a dashboard with this month’s call center data and found a lot of situations where the data looks heavy on the “fake” and not really conducive to a real-world scenario. Is there an opportunity for you guys to refresh the dataset and account for the errors I have found (below)?
1) Unique customer IDs for every record
2) Unique agent IDs and names for every record
3) CSAT responses for every record / 100% CSAT response rate
4) Fake locations
5) Distributions across every dimension are suspiciously even
6) Same for month-over-month changes across KPIs
Thanks for everything you guys are doing here!!
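The too-clean patterns called out in this feedback are easy to check programmatically. A hedged sketch with assumed column names (`customer_id`, `agent_id`, `csat_score`, `channel` — rename to match the real headers):

```python
import pandas as pd

def realism_checks(df: pd.DataFrame) -> dict:
    """Flag the too-clean patterns described above. Column names are assumed."""
    n = len(df)
    return {
        # Real data has many repeat customers, so this ratio sits well below 1.0
        "unique_customer_ratio": df["customer_id"].nunique() / n,
        # Real centers have far fewer agents than calls
        "unique_agent_ratio": df["agent_id"].nunique() / n,
        # Real survey response rates rarely approach 100%
        "csat_response_rate": df["csat_score"].notna().mean(),
        # Near-zero spread across channel shares suggests uniform sampling
        "channel_share_spread": df["channel"].value_counts(normalize=True).std(),
    }
```

Ratios near 1.0 for the ID checks, a CSAT response rate of 1.0, or a share spread near zero would reproduce exactly the issues listed above.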