Don't configure hadoop.tmp.dir in Spark plugin
During job execution with Swift, spark-submit writes some data to
hadoop.tmp.dir, but the executor user has no permission to write to
that directory. It is therefore suggested to use the default value for
this config option, as is done for all other plugins.
Change-Id: I0e2ec932cfd4fc49cd0f20098badb15c18769e20
Closes-bug: 1524997
(cherry picked from commit 3924d3e3bf)
parent 4cd7844763
commit c716f5b723
@@ -0,0 +1,3 @@
+---
+fixes:
+  - Fixed issues with using Swift as an output datasource.
@@ -265,8 +265,6 @@ def generate_xml_configs(configs, storage_path, nn_hostname, hadoop_port):
                                                      '/dfs/nn'),
         'dfs.datanode.data.dir': extract_hadoop_path(storage_path,
                                                      '/dfs/dn'),
-        'hadoop.tmp.dir': extract_hadoop_path(storage_path,
-                                              '/dfs'),
         'dfs.hosts': '/etc/hadoop/dn.incl',
         'dfs.hosts.exclude': '/etc/hadoop/dn.excl'
     }
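A minimal sketch of the config mapping after this change. The dict keys come from the diff above; the `extract_hadoop_path` stub here is an assumption (a comma-joined path helper, not Sahara's actual implementation), and the `storage_path` values are hypothetical. With `hadoop.tmp.dir` unset, Hadoop falls back to its core-default.xml value, `/tmp/hadoop-${user.name}`, which the executor user can write to.

```python
def extract_hadoop_path(paths, suffix):
    # Assumed behavior: append the suffix to each storage path and
    # return them comma-separated, as Hadoop dir options expect.
    return ",".join(p + suffix for p in paths)

storage_path = ["/volumes/disk1", "/volumes/disk2"]  # hypothetical mounts

configs = {
    'dfs.namenode.name.dir': extract_hadoop_path(storage_path, '/dfs/nn'),
    'dfs.datanode.data.dir': extract_hadoop_path(storage_path, '/dfs/dn'),
    # 'hadoop.tmp.dir' is intentionally omitted: Hadoop then uses its
    # built-in default (/tmp/hadoop-${user.name}), avoiding the
    # permission error spark-submit hit when writing temp data.
    'dfs.hosts': '/etc/hadoop/dn.incl',
    'dfs.hosts.exclude': '/etc/hadoop/dn.excl',
}

print(configs['dfs.datanode.data.dir'])
# → /volumes/disk1/dfs/dn,/volumes/disk2/dfs/dn
```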