Spark has no hard dependency on Hadoop. You can use Spark without Hadoop, but some functionality that relies on Hadoop (for example, running on YARN or reading from HDFS) will be unavailable. Spark can run over essentially any distributed file system; it doesn't have to be Hadoop.
Spark doesn't have its own storage system, so it depends on external storage such as Cassandra, HDFS, or S3.
Although Spark is often deployed alongside Hadoop, you can run Spark without Hadoop in standalone mode. You can refer to the Spark documentation for more details.
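As a rough sketch of what standalone mode looks like in practice (paths and the application name `my_app.py` are illustrative, assuming a local Spark download), you start Spark's own master and worker daemons and submit against the standalone master URL, with no Hadoop installation involved:

```shell
# Start the standalone master (prints the spark:// master URL in its log)
./sbin/start-master.sh

# Start a worker and register it with the master
# (replace host/port with the URL shown by the master)
./sbin/start-worker.sh spark://master-host:7077

# Submit an application to the standalone cluster;
# my_app.py is a hypothetical example application
./bin/spark-submit --master spark://master-host:7077 my_app.py

# For quick local testing you can skip the daemons entirely
# and run on all local cores:
./bin/spark-submit --master "local[*]" my_app.py
```

Inside the application you would then read from a non-Hadoop store, e.g. a local path (`file:///data/input.csv`) or S3, instead of an HDFS URL.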