When Leibniz created his dy/dx notation, he really did mean for dy and dx to be numbers: infinitesimal numbers, that is. An infinitesimal (more precisely, a positive infinitesimal) is a number that's bigger than zero but smaller than every positive real number. Think about that for a minute. If ε is infinitesimal, then ε is greater than zero but smaller than 0.00001, smaller than 0.000001, smaller than 0.00000000000001, and so on. No matter how many zeros you write, ε < 0.00...001.
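To pin the idea down, that whole infinite list of inequalities compresses into one formal condition (ε and r here are just the symbols already in play, nothing new):

\[
0 < \varepsilon < r \quad \text{for every real number } r > 0.
\]

Notice that no real number can satisfy this: if ε were itself a positive real, taking r = ε would force ε < ε, a contradiction. So infinitesimals, if they exist at all, have to live in a number system strictly larger than the reals.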
Mathematicians had a lot of trouble with infinitesimals, because nobody could make the idea precise: it seemed fuzzy and contradictory. They switched to limits instead, and to this day we define continuity and derivatives in terms of limits. It wasn't until 1966 that a guy named Abraham Robinson published a book called "Non-standard Analysis" and convinced people that yes, we really can do calculus rigorously with these weird infinitesimal numbers.
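To see what Robinson bought us, here's the derivative both ways: the familiar limit definition on the left, and a sketch of the non-standard version on the right, where ε is any nonzero infinitesimal and st(·) is the "standard part" function, which rounds a finite hyperreal to the unique real number infinitely close to it:

\[
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\qquad\text{vs.}\qquad
f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon) - f(x)}{\varepsilon}\right).
\]

The right-hand formula is Leibniz's intuition made honest: divide an infinitesimal change in y by an infinitesimal change in x, then discard the leftover infinitesimal error.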